Jan 23 23:52:38.875420 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:52:38.875467 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:52:38.875509 kernel: KASLR enabled
Jan 23 23:52:38.875526 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 23:52:38.875538 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 23 23:52:38.875550 kernel: random: crng init done
Jan 23 23:52:38.875565 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:52:38.875578 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 23 23:52:38.875591 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 23:52:38.875608 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875621 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875633 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875646 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875659 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875674 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875690 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875704 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875718 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 23:52:38.875728 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 23:52:38.875735 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 23:52:38.875741 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:52:38.875747 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 23:52:38.877949 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 23 23:52:38.877965 kernel: Zone ranges:
Jan 23 23:52:38.877972 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:52:38.877984 kernel: DMA32 empty
Jan 23 23:52:38.877990 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 23 23:52:38.877996 kernel: Movable zone start for each node
Jan 23 23:52:38.878003 kernel: Early memory node ranges
Jan 23 23:52:38.878010 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 23 23:52:38.878016 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 23 23:52:38.878022 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 23 23:52:38.878029 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 23 23:52:38.878035 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 23 23:52:38.878042 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 23 23:52:38.878048 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 23 23:52:38.878055 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 23:52:38.878063 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 23:52:38.878069 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:52:38.878076 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:52:38.878342 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:52:38.878351 kernel: psci: Trusted OS migration not required
Jan 23 23:52:38.878359 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:52:38.878404 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 23:52:38.878412 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:52:38.878419 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:52:38.878426 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:52:38.878433 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:52:38.878440 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:52:38.878446 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:52:38.878453 kernel: CPU features: detected: Spectre-v4
Jan 23 23:52:38.878460 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:52:38.878467 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:52:38.878476 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:52:38.878483 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:52:38.878490 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:52:38.878497 kernel: alternatives: applying boot alternatives
Jan 23 23:52:38.878506 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:52:38.878513 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:52:38.878520 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:52:38.878527 kernel: Fallback order for Node 0: 0
Jan 23 23:52:38.878534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 23 23:52:38.878540 kernel: Policy zone: Normal
Jan 23 23:52:38.878547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:52:38.878555 kernel: software IO TLB: area num 2.
Jan 23 23:52:38.878562 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 23 23:52:38.878569 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 23 23:52:38.878576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:52:38.878583 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:52:38.878591 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:52:38.878598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:52:38.878605 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:52:38.878611 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:52:38.878618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:52:38.878625 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:52:38.878632 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:52:38.878640 kernel: GICv3: 256 SPIs implemented
Jan 23 23:52:38.878647 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:52:38.878654 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:52:38.878661 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 23:52:38.878668 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 23:52:38.878674 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 23:52:38.878681 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:52:38.878688 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:52:38.878695 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 23 23:52:38.878702 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 23 23:52:38.878709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:52:38.878718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:52:38.878724 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:52:38.878731 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:52:38.878738 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:52:38.878746 kernel: Console: colour dummy device 80x25
Jan 23 23:52:38.880806 kernel: ACPI: Core revision 20230628
Jan 23 23:52:38.880828 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:52:38.880836 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:52:38.880843 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:52:38.880850 kernel: landlock: Up and running.
Jan 23 23:52:38.880861 kernel: SELinux: Initializing.
Jan 23 23:52:38.880869 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:52:38.880876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:52:38.880883 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:52:38.880890 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:52:38.880897 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:52:38.880905 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:52:38.880912 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 23 23:52:38.880919 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 23 23:52:38.880928 kernel: Remapping and enabling EFI services.
Jan 23 23:52:38.880935 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:52:38.880942 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:52:38.880950 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 23:52:38.880957 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 23 23:52:38.880964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:52:38.880971 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:52:38.880978 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:52:38.880985 kernel: SMP: Total of 2 processors activated.
Jan 23 23:52:38.880994 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:52:38.881001 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:52:38.881009 kernel: CPU features: detected: Common not Private translations
Jan 23 23:52:38.881022 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:52:38.881030 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 23:52:38.881038 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:52:38.881045 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:52:38.881052 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:52:38.881060 kernel: CPU features: detected: RAS Extension Support
Jan 23 23:52:38.881069 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 23:52:38.881077 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:52:38.881084 kernel: alternatives: applying system-wide alternatives
Jan 23 23:52:38.881092 kernel: devtmpfs: initialized
Jan 23 23:52:38.881099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:52:38.881107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:52:38.881114 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:52:38.881121 kernel: SMBIOS 3.0.0 present.
Jan 23 23:52:38.881130 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 23 23:52:38.881138 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:52:38.881145 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:52:38.881153 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:52:38.881160 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:52:38.881168 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:52:38.881175 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Jan 23 23:52:38.881183 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:52:38.881190 kernel: cpuidle: using governor menu
Jan 23 23:52:38.881199 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:52:38.881207 kernel: ASID allocator initialised with 32768 entries
Jan 23 23:52:38.881214 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:52:38.881221 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:52:38.881229 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 23:52:38.881236 kernel: Modules: 0 pages in range for non-PLT usage
Jan 23 23:52:38.881244 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:52:38.881251 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:52:38.881258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:52:38.881267 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:52:38.881275 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:52:38.881282 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:52:38.881290 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:52:38.881297 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:52:38.881305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:52:38.881312 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:52:38.881319 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:52:38.881327 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:52:38.881335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:52:38.881343 kernel: ACPI: Interpreter enabled
Jan 23 23:52:38.881350 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:52:38.881358 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:52:38.881376 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 23:52:38.881385 kernel: printk: console [ttyAMA0] enabled
Jan 23 23:52:38.881392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 23:52:38.881541 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:52:38.881617 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:52:38.881685 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:52:38.881789 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 23:52:38.881862 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 23:52:38.881872 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 23:52:38.881880 kernel: PCI host bridge to bus 0000:00
Jan 23 23:52:38.881955 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 23:52:38.882028 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:52:38.882088 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 23:52:38.882146 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 23:52:38.882230 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 23 23:52:38.882305 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 23 23:52:38.882389 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 23 23:52:38.882463 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 23:52:38.882541 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.882608 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 23 23:52:38.882682 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.882751 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 23 23:52:38.884968 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.885040 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 23 23:52:38.885119 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.885186 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 23 23:52:38.885259 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.885350 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 23 23:52:38.885438 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.885503 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 23 23:52:38.885580 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.885645 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 23 23:52:38.885722 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.887858 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 23 23:52:38.887953 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 23 23:52:38.888021 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 23 23:52:38.888101 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 23 23:52:38.888168 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 23 23:52:38.888246 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 23 23:52:38.888318 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 23 23:52:38.888412 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 23:52:38.888487 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 23 23:52:38.888564 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 23 23:52:38.888639 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 23 23:52:38.888718 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 23 23:52:38.890880 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 23 23:52:38.890968 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 23:52:38.891049 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 23 23:52:38.891121 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 23:52:38.891204 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 23 23:52:38.891273 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 23 23:52:38.891342 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 23:52:38.891474 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 23 23:52:38.891549 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 23 23:52:38.891625 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 23:52:38.891708 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 23 23:52:38.893618 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 23 23:52:38.893707 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 23 23:52:38.893819 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 23 23:52:38.893895 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 23:52:38.893961 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 23:52:38.894027 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 23:52:38.894102 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 23:52:38.894175 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 23:52:38.894241 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 23:52:38.894310 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 23:52:38.894393 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 23:52:38.894466 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 23:52:38.894537 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 23:52:38.894615 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 23:52:38.894687 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 23:52:38.894779 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 23:52:38.894853 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 23:52:38.894919 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 23:52:38.894989 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 23:52:38.895056 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 23:52:38.895121 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 23:52:38.895193 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 23:52:38.895259 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 23 23:52:38.895323 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 23 23:52:38.895410 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 23:52:38.895479 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 23:52:38.895544 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 23:52:38.895614 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 23:52:38.895679 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 23:52:38.895749 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 23:52:38.897902 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 23 23:52:38.897975 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 23:52:38.898043 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 23 23:52:38.898109 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 23:52:38.898176 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 23 23:52:38.898249 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 23:52:38.898315 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 23 23:52:38.898400 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 23:52:38.898474 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 23 23:52:38.898541 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 23:52:38.898607 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 23 23:52:38.898672 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 23:52:38.898743 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 23 23:52:38.898827 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 23:52:38.898895 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 23 23:52:38.898959 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 23:52:38.899026 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 23 23:52:38.899090 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 23 23:52:38.899161 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 23 23:52:38.899231 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 23 23:52:38.899296 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 23 23:52:38.899363 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 23 23:52:38.899443 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 23 23:52:38.899509 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 23 23:52:38.899575 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 23 23:52:38.899640 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 23 23:52:38.899704 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 23 23:52:38.900450 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 23 23:52:38.900598 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 23 23:52:38.900712 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 23 23:52:38.900919 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 23 23:52:38.901045 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 23 23:52:38.901174 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 23 23:52:38.901282 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 23 23:52:38.901444 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 23 23:52:38.901569 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 23 23:52:38.901677 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 23 23:52:38.901804 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 23 23:52:38.901922 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 23 23:52:38.902040 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 23 23:52:38.902151 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 23:52:38.902261 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 23 23:52:38.902378 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 23:52:38.902493 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 23:52:38.902598 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 23:52:38.902700 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 23:52:38.905149 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 23 23:52:38.905261 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 23:52:38.905345 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 23:52:38.905490 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 23 23:52:38.905587 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 23:52:38.905681 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 23:52:38.905917 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 23 23:52:38.906022 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 23:52:38.906107 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 23:52:38.906201 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 23 23:52:38.906286 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 23:52:38.906443 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 23:52:38.906550 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 23:52:38.906640 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 23:52:38.906721 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 23 23:52:38.906858 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 23:52:38.906942 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 23 23:52:38.907018 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 23 23:52:38.907086 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 23:52:38.907152 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 23:52:38.907247 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 23:52:38.907331 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 23:52:38.907426 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 23 23:52:38.907498 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 23 23:52:38.907566 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 23:52:38.907637 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 23:52:38.907708 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 23:52:38.909866 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 23:52:38.909972 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 23 23:52:38.910052 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 23 23:52:38.910127 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 23 23:52:38.910199 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 23:52:38.910265 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 23:52:38.910338 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 23:52:38.910423 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 23:52:38.910497 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 23:52:38.910564 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 23:52:38.910635 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 23:52:38.910703 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 23:52:38.911880 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 23:52:38.911971 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 23 23:52:38.912046 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 23:52:38.912114 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 23 23:52:38.912183 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 23 23:52:38.912243 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:52:38.912300 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 23 23:52:38.912390 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 23:52:38.912454 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 23 23:52:38.912518 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 23:52:38.912587 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 23 23:52:38.912650 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 23 23:52:38.912710 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 23:52:38.912791 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 23 23:52:38.912853 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 23 23:52:38.912916 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 23:52:38.912990 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 23 23:52:38.913053 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 23 23:52:38.913136 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 23:52:38.913219 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 23 23:52:38.913288 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 23 23:52:38.913361 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 23:52:38.913444 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 23 23:52:38.914632 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 23 23:52:38.914739 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 23:52:38.914874 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 23 23:52:38.914947 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 23 23:52:38.915041 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 23:52:38.915118 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 23 23:52:38.917569 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 23 23:52:38.917645 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 23:52:38.917717 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 23 23:52:38.917815 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 23 23:52:38.917883 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 23 23:52:38.917893 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:52:38.917901 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:52:38.917909 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:52:38.917917 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:52:38.917925 kernel: iommu: Default domain type: Translated
Jan 23 23:52:38.917933 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:52:38.917941 kernel: efivars: Registered efivars operations
Jan 23 23:52:38.917953 kernel: vgaarb: loaded
Jan 23 23:52:38.917967 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:52:38.917979 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:52:38.917989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:52:38.917997 kernel: pnp: PnP ACPI init
Jan 23 23:52:38.918858 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 23 23:52:38.918882 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:52:38.918892 kernel: NET: Registered PF_INET protocol family
Jan 23 23:52:38.918900 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:52:38.918914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:52:38.918922 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:52:38.918930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:52:38.918938 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:52:38.918946 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:52:38.918954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:52:38.918962 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:52:38.918970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:52:38.919061 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 23 23:52:38.919075 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:52:38.919083 kernel: kvm [1]: HYP mode not available
Jan 23 23:52:38.919091 kernel: Initialise system trusted keyrings
Jan 23 23:52:38.919099 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:52:38.919107 kernel: Key type asymmetric registered
Jan 23 23:52:38.919115 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:52:38.919122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:52:38.919130 kernel: io scheduler mq-deadline registered
Jan 23 23:52:38.919138 kernel: io scheduler kyber registered
Jan 23 23:52:38.919148 kernel: io scheduler bfq registered
Jan 23 23:52:38.919156 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:52:38.919235 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 23 23:52:38.919310 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 23 23:52:38.919403 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.919481 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 23 23:52:38.919552 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 23 23:52:38.919625 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.919699 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 23 23:52:38.921823 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 23 23:52:38.921930 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.922004 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 23 23:52:38.922075 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 23 23:52:38.922151 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.922225 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 23 23:52:38.922294 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 23 23:52:38.922360 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.922461 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 23 23:52:38.922533 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 23 23:52:38.922606 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.922679 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 23 23:52:38.922746 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 23 23:52:38.924031 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.924113 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 23 23:52:38.924184 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 23 23:52:38.924258 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.924269 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 23 23:52:38.924336 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 23 23:52:38.924422 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 23 23:52:38.924492 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 23 23:52:38.924503 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:52:38.924516 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:52:38.924524 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:52:38.924599 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 23 23:52:38.924674 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 23 23:52:38.924686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:52:38.924694 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:52:38.924774 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 23 23:52:38.924785 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 23 23:52:38.924794 kernel: thunder_xcv, ver 1.0
Jan 23 23:52:38.924808 kernel: thunder_bgx, ver 1.0
Jan 23 23:52:38.924816 kernel: nicpf, ver 1.0
Jan 23 23:52:38.924824 kernel: nicvf, ver 1.0
Jan 23 23:52:38.924904 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:52:38.924968 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:52:38 UTC (1769212358)
Jan 23 23:52:38.924978 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:52:38.924987 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 23 23:52:38.924995 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:52:38.925005 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:52:38.925013 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:52:38.925021 kernel: Segment Routing with IPv6
Jan 23 23:52:38.925029 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:52:38.925037 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:52:38.925044 kernel: Key type dns_resolver registered
Jan 23 23:52:38.925053 kernel: registered taskstats version 1
Jan 23 23:52:38.925061 kernel: Loading compiled-in X.509 certificates
Jan 23 23:52:38.925069 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:52:38.925079 kernel: Key type .fscrypt registered
Jan 23 23:52:38.925087 kernel: Key type fscrypt-provisioning registered
Jan 23 23:52:38.925094 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:52:38.925102 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:52:38.925110 kernel: ima: No architecture policies found
Jan 23 23:52:38.925118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:52:38.925126 kernel: clk: Disabling unused clocks
Jan 23 23:52:38.925134 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:52:38.925141 kernel: Run /init as init process
Jan 23 23:52:38.925151 kernel: with arguments:
Jan 23 23:52:38.925158 kernel: /init
Jan 23 23:52:38.925166 kernel: with environment:
Jan 23 23:52:38.925174 kernel: HOME=/
Jan 23 23:52:38.925181 kernel: TERM=linux
Jan 23 23:52:38.925191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:52:38.925201 systemd[1]: Detected virtualization kvm.
Jan 23 23:52:38.925210 systemd[1]: Detected architecture arm64.
Jan 23 23:52:38.925219 systemd[1]: Running in initrd.
Jan 23 23:52:38.925227 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:52:38.925235 systemd[1]: Hostname set to .
Jan 23 23:52:38.925243 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:52:38.925252 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:52:38.925260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:52:38.925269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:52:38.925278 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:52:38.925288 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:52:38.925297 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:52:38.925305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:52:38.925322 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:52:38.925331 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:52:38.925339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:52:38.925348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:52:38.925359 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:52:38.925377 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:52:38.925386 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:52:38.925394 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:52:38.925402 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:52:38.925411 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:52:38.925419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:52:38.925427 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:52:38.925437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:52:38.925445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:52:38.925454 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:52:38.925462 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:52:38.925470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:52:38.925479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:52:38.925488 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:52:38.925496 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:52:38.925505 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:52:38.925515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:52:38.925543 systemd-journald[237]: Collecting audit messages is disabled.
Jan 23 23:52:38.925564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:38.925573 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:52:38.925583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:52:38.925591 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:52:38.925600 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:52:38.925608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:38.925618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:38.925627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:52:38.925635 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:52:38.925644 kernel: Bridge firewalling registered
Jan 23 23:52:38.925652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:52:38.925661 systemd-journald[237]: Journal started
Jan 23 23:52:38.925681 systemd-journald[237]: Runtime Journal (/run/log/journal/202fd767e98f4f8a8d2105ac126d994e) is 8.0M, max 76.6M, 68.6M free.
Jan 23 23:52:38.897004 systemd-modules-load[238]: Inserted module 'overlay'
Jan 23 23:52:38.928699 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:52:38.920175 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 23 23:52:38.928525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:52:38.929604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:38.938255 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:52:38.943797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:52:38.947122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:52:38.952310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:52:38.963180 dracut-cmdline[259]: dracut-dracut-053
Jan 23 23:52:38.970509 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:52:38.973645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:52:38.974862 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:52:38.982976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:52:39.009257 systemd-resolved[292]: Positive Trust Anchors:
Jan 23 23:52:39.009935 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:52:39.009969 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:52:39.019614 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jan 23 23:52:39.020862 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:52:39.023050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:52:39.040830 kernel: SCSI subsystem initialized
Jan 23 23:52:39.045793 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:52:39.053809 kernel: iscsi: registered transport (tcp)
Jan 23 23:52:39.066799 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:52:39.066861 kernel: QLogic iSCSI HBA Driver
Jan 23 23:52:39.116726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:52:39.121984 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:52:39.143110 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:52:39.143246 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:52:39.143279 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:52:39.191839 kernel: raid6: neonx8 gen() 15681 MB/s
Jan 23 23:52:39.208824 kernel: raid6: neonx4 gen() 15379 MB/s
Jan 23 23:52:39.225796 kernel: raid6: neonx2 gen() 13083 MB/s
Jan 23 23:52:39.242826 kernel: raid6: neonx1 gen() 10363 MB/s
Jan 23 23:52:39.259802 kernel: raid6: int64x8 gen() 6868 MB/s
Jan 23 23:52:39.276844 kernel: raid6: int64x4 gen() 7236 MB/s
Jan 23 23:52:39.293805 kernel: raid6: int64x2 gen() 6068 MB/s
Jan 23 23:52:39.310832 kernel: raid6: int64x1 gen() 5033 MB/s
Jan 23 23:52:39.310927 kernel: raid6: using algorithm neonx8 gen() 15681 MB/s
Jan 23 23:52:39.327823 kernel: raid6: .... xor() 11799 MB/s, rmw enabled
Jan 23 23:52:39.327914 kernel: raid6: using neon recovery algorithm
Jan 23 23:52:39.333113 kernel: xor: measuring software checksum speed
Jan 23 23:52:39.333177 kernel: 8regs : 19764 MB/sec
Jan 23 23:52:39.333195 kernel: 32regs : 19664 MB/sec
Jan 23 23:52:39.333948 kernel: arm64_neon : 27141 MB/sec
Jan 23 23:52:39.334016 kernel: xor: using function: arm64_neon (27141 MB/sec)
Jan 23 23:52:39.384815 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:52:39.401130 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:52:39.406913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:52:39.421323 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 23 23:52:39.424781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:52:39.433933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:52:39.448894 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jan 23 23:52:39.487497 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:52:39.494987 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:52:39.542190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:52:39.551931 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:52:39.569901 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:52:39.572163 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:52:39.575550 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:52:39.577021 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:52:39.588007 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:52:39.601717 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:52:39.652915 kernel: scsi host0: Virtio SCSI HBA
Jan 23 23:52:39.659060 kernel: ACPI: bus type USB registered
Jan 23 23:52:39.659108 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 23:52:39.660889 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 23:52:39.665803 kernel: usbcore: registered new interface driver usbfs
Jan 23 23:52:39.666143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:52:39.669226 kernel: usbcore: registered new interface driver hub
Jan 23 23:52:39.669251 kernel: usbcore: registered new device driver usb
Jan 23 23:52:39.666252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:39.671312 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:39.672088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:52:39.672231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:39.673437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:39.684022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:39.701784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:39.713189 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 23 23:52:39.714957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:39.719868 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 23 23:52:39.720117 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:52:39.721234 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 23 23:52:39.727872 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 23 23:52:39.728050 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 23 23:52:39.728139 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 23 23:52:39.728233 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 23 23:52:39.729781 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 23 23:52:39.730892 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 23 23:52:39.731042 kernel: hub 1-0:1.0: USB hub found
Jan 23 23:52:39.732842 kernel: hub 1-0:1.0: 4 ports detected
Jan 23 23:52:39.732992 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 23 23:52:39.734801 kernel: hub 2-0:1.0: USB hub found
Jan 23 23:52:39.734964 kernel: hub 2-0:1.0: 4 ports detected
Jan 23 23:52:39.741939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:39.746880 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 23 23:52:39.748186 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 23 23:52:39.748345 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 23 23:52:39.748455 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 23 23:52:39.749299 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 23:52:39.753581 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:52:39.753620 kernel: GPT:17805311 != 80003071
Jan 23 23:52:39.753632 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:52:39.754026 kernel: GPT:17805311 != 80003071
Jan 23 23:52:39.755436 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:52:39.755505 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:39.756775 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 23:52:39.793828 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (513) Jan 23 23:52:39.797149 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (503) Jan 23 23:52:39.802343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 23:52:39.810322 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 23:52:39.816230 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 23:52:39.817898 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 23:52:39.830664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 23:52:39.841960 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:52:39.849937 disk-uuid[575]: Primary Header is updated. Jan 23 23:52:39.849937 disk-uuid[575]: Secondary Entries is updated. Jan 23 23:52:39.849937 disk-uuid[575]: Secondary Header is updated. Jan 23 23:52:39.855778 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:39.862772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:39.972805 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 23:52:40.113231 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 23:52:40.113303 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 23:52:40.113964 kernel: usbcore: registered new interface driver usbhid Jan 23 23:52:40.114779 kernel: usbhid: USB HID core driver Jan 23 23:52:40.217861 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 23:52:40.348801 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 23:52:40.402822 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 23:52:40.873780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:40.875490 disk-uuid[576]: The operation has completed successfully. Jan 23 23:52:40.921406 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:52:40.922793 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:52:40.938090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:52:40.944954 sh[594]: Success Jan 23 23:52:40.960463 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:52:41.014148 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:52:41.022446 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:52:41.024783 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 23:52:41.042496 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:52:41.042552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:41.042573 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:52:41.042592 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:52:41.043026 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:52:41.050806 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 23:52:41.053044 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:52:41.054856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:52:41.061018 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:52:41.066607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:52:41.079085 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:41.079137 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:41.079151 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:41.084236 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 23:52:41.084298 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:41.095468 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:52:41.097063 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:41.102051 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:52:41.109004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:52:41.195901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:52:41.205589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:52:41.212691 ignition[672]: Ignition 2.19.0 Jan 23 23:52:41.213443 ignition[672]: Stage: fetch-offline Jan 23 23:52:41.213494 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:41.213503 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:41.213683 ignition[672]: parsed url from cmdline: "" Jan 23 23:52:41.216011 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:52:41.213687 ignition[672]: no config URL provided Jan 23 23:52:41.213692 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:52:41.213701 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:52:41.213707 ignition[672]: failed to fetch config: resource requires networking Jan 23 23:52:41.213974 ignition[672]: Ignition finished successfully Jan 23 23:52:41.234378 systemd-networkd[786]: lo: Link UP Jan 23 23:52:41.234391 systemd-networkd[786]: lo: Gained carrier Jan 23 23:52:41.236654 systemd-networkd[786]: Enumeration completed Jan 23 23:52:41.237264 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:41.237268 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
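The dm-0 mount above is the read-only /usr image behind the dm-verity mapping that verity-setup.service created. A hedged sketch for inspecting that mapping on a running system; it assumes the cryptsetup tools are installed, the device-mapper target is named `usr`, and the command is run with sufficient privileges:

```python
#!/usr/bin/env python3
"""Show the dm-verity mapping backing /usr.

Minimal sketch: `veritysetup status usr` (from cryptsetup) prints the data
device, hash device and verification status for the mapping. The target name
"usr" is an assumption based on the /dev/mapper/usr device in the log.
"""
import subprocess

def usr_verity_status() -> str:
    result = subprocess.run(
        ["veritysetup", "status", "usr"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(usr_verity_status())
```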
Jan 23 23:52:41.237447 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:52:41.238626 systemd[1]: Reached target network.target - Network. Jan 23 23:52:41.238860 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:41.238865 systemd-networkd[786]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:41.239605 systemd-networkd[786]: eth0: Link UP Jan 23 23:52:41.239608 systemd-networkd[786]: eth0: Gained carrier Jan 23 23:52:41.239616 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:41.247397 systemd-networkd[786]: eth1: Link UP Jan 23 23:52:41.247401 systemd-networkd[786]: eth1: Gained carrier Jan 23 23:52:41.247409 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:41.249456 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:52:41.263564 ignition[789]: Ignition 2.19.0 Jan 23 23:52:41.264728 ignition[789]: Stage: fetch Jan 23 23:52:41.265120 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:41.265141 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:41.265322 ignition[789]: parsed url from cmdline: "" Jan 23 23:52:41.265329 ignition[789]: no config URL provided Jan 23 23:52:41.265619 ignition[789]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:52:41.265647 ignition[789]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:52:41.265677 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 23:52:41.266307 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 23:52:41.285336 systemd-networkd[786]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 23:52:41.302925 systemd-networkd[786]: eth0: DHCPv4 address 49.13.80.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 23:52:41.467437 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 23:52:41.473939 ignition[789]: GET result: OK Jan 23 23:52:41.474039 ignition[789]: parsing config with SHA512: 8977ee3c9b382d8ed47bdecd4be137f84b6a5e092a7c6a59a14916d5c777d6674451ebeee0c02b99d95a726ea5c00e03d67333be12d88a61f6f99e1853abfbdd Jan 23 23:52:41.479590 unknown[789]: fetched base config from "system" Jan 23 23:52:41.479608 unknown[789]: fetched base config from "system" Jan 23 23:52:41.480370 ignition[789]: fetch: fetch complete Jan 23 23:52:41.479615 unknown[789]: fetched user config from "hetzner" Jan 23 23:52:41.480376 ignition[789]: fetch: fetch passed Jan 23 23:52:41.480430 ignition[789]: Ignition finished successfully Jan 23 23:52:41.484232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:52:41.490992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
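The fetch stage above tries the Hetzner metadata service before the link is up, fails with "network is unreachable", retries after DHCP completes, and then identifies the fetched config by its SHA-512 digest. A minimal standard-library sketch of that fetch-and-digest pattern; the endpoint is the one shown in the log, while the retry count and timeouts here are illustrative rather than Ignition's own:

```python
#!/usr/bin/env python3
"""Fetch Hetzner user data with retries and print its SHA-512 digest.

Illustrative sketch only: the URL matches the boot log, but the retry policy
and timeouts are made up for the example.
"""
import hashlib
import time
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                return resp.read()
        except OSError as exc:
            # Early attempts can fail while the interface is still coming up.
            print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    raise SystemExit("metadata service unreachable")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsed config with SHA512:", hashlib.sha512(data).hexdigest())
```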
Jan 23 23:52:41.504578 ignition[796]: Ignition 2.19.0 Jan 23 23:52:41.504595 ignition[796]: Stage: kargs Jan 23 23:52:41.505396 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:41.505415 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:41.506455 ignition[796]: kargs: kargs passed Jan 23 23:52:41.509918 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:52:41.506523 ignition[796]: Ignition finished successfully Jan 23 23:52:41.517023 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:52:41.528431 ignition[802]: Ignition 2.19.0 Jan 23 23:52:41.528443 ignition[802]: Stage: disks Jan 23 23:52:41.528626 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:41.528636 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:41.529690 ignition[802]: disks: disks passed Jan 23 23:52:41.531556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:52:41.529739 ignition[802]: Ignition finished successfully Jan 23 23:52:41.533263 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:52:41.535122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:52:41.536313 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:52:41.537750 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:52:41.539317 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:52:41.544963 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:52:41.563818 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 23 23:52:41.568283 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:52:41.575951 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:52:41.648793 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:52:41.649199 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:52:41.650749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:52:41.662917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:52:41.665484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:52:41.667969 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 23:52:41.672972 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:52:41.676309 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819) Jan 23 23:52:41.674457 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:52:41.678513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
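The fsck summary above reports ROOT filesystem usage as used/total counts for inodes ("files") and blocks. A trivial worked example turning those counts into percentages, using the numbers straight from the log:

```python
#!/usr/bin/env python3
"""Turn the fsck summary counts from the boot log into usage percentages."""

inodes_used, inodes_total = 14, 1_628_000        # "14/1628000 files"
blocks_used, blocks_total = 120_691, 1_617_920   # "120691/1617920 blocks"

print(f"inode usage: {inodes_used / inodes_total:.4%}")   # ~0.0009%
print(f"block usage: {blocks_used / blocks_total:.2%}")   # ~7.46%
```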
Jan 23 23:52:41.680379 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:41.680456 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:41.680483 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:41.684774 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 23:52:41.684815 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:41.685406 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:52:41.693328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:52:41.736568 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:52:41.741110 coreos-metadata[821]: Jan 23 23:52:41.740 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 23:52:41.744454 coreos-metadata[821]: Jan 23 23:52:41.743 INFO Fetch successful Jan 23 23:52:41.746584 coreos-metadata[821]: Jan 23 23:52:41.743 INFO wrote hostname ci-4081-3-6-n-417febb2dd to /sysroot/etc/hostname Jan 23 23:52:41.746381 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:52:41.752512 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:52:41.754790 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:52:41.759802 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:52:41.858718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:52:41.866928 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:52:41.872962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:52:41.879848 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:41.899602 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:52:41.904155 ignition[936]: INFO : Ignition 2.19.0 Jan 23 23:52:41.905364 ignition[936]: INFO : Stage: mount Jan 23 23:52:41.905799 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:41.905799 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:41.907035 ignition[936]: INFO : mount: mount passed Jan 23 23:52:41.907035 ignition[936]: INFO : Ignition finished successfully Jan 23 23:52:41.908984 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:52:41.916870 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:52:42.042155 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:52:42.047978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:52:42.060417 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Jan 23 23:52:42.060497 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:42.060523 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:42.061778 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:42.064816 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 23:52:42.064868 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:42.067789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
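The flatcar-metadata-hostname step above asks the metadata service for the instance hostname and writes it into the new root before the pivot. A minimal sketch of the same two steps; the endpoint is the one from the log, while the output path is taken from the command line here instead of the initramfs' /sysroot path:

```python
#!/usr/bin/env python3
"""Fetch the instance hostname from the Hetzner metadata service and store it.

Minimal sketch of what the log's hostname agent does; the destination path is
a parameter rather than /sysroot/etc/hostname.
"""
import sys
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(dest: str) -> str:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    dest = sys.argv[1] if len(sys.argv) > 1 else "./hostname"
    print("wrote hostname", write_hostname(dest))
```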
Jan 23 23:52:42.098747 ignition[964]: INFO : Ignition 2.19.0 Jan 23 23:52:42.099540 ignition[964]: INFO : Stage: files Jan 23 23:52:42.100204 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:42.101823 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:42.102604 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:52:42.104297 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:52:42.104297 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:52:42.107286 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:52:42.108523 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:52:42.108523 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:52:42.107707 unknown[964]: wrote ssh authorized keys file for user: core Jan 23 23:52:42.111268 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:52:42.111268 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:52:42.111268 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:52:42.111268 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 23:52:42.193930 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 23:52:42.273060 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:52:42.273060 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:52:42.277921 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 23 23:52:42.375944 systemd-networkd[786]: eth1: Gained IPv6LL Jan 23 23:52:42.528444 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:52:42.781661 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:42.792016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:52:43.044978 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 23 23:52:43.208133 systemd-networkd[786]: eth0: Gained IPv6LL Jan 23 23:52:43.843653 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:43.843653 ignition[964]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 23 23:52:43.848863 ignition[964]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 23:52:43.861285 ignition[964]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 23:52:43.861285 
ignition[964]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 23 23:52:43.861285 ignition[964]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:52:43.861285 ignition[964]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:52:43.861285 ignition[964]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:52:43.861285 ignition[964]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:52:43.861285 ignition[964]: INFO : files: files passed Jan 23 23:52:43.861285 ignition[964]: INFO : Ignition finished successfully Jan 23 23:52:43.856197 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:52:43.863978 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:52:43.867915 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:52:43.872449 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:52:43.872831 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:52:43.883260 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:43.883260 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:43.886047 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:43.888592 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:43.889859 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:52:43.898042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:52:43.930117 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:52:43.930264 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:52:43.932925 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:52:43.935292 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:52:43.937936 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:52:43.943943 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:52:43.956639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:43.964031 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:52:43.981363 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:43.982180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:43.984303 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:52:43.986328 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:52:43.986486 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:43.988467 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:52:43.989172 systemd[1]: Stopped target basic.target - Basic System. 
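Everything the files stage did above was driven by the Ignition config fetched earlier: SSH keys for the core user, files such as /etc/flatcar-cgroupv1, the kubernetes.raw sysext link, and systemd drop-ins like containerd's 10-use-cgroupfs.conf. A rough sketch of a config that would produce a subset of those operations, built as a Python dict and emitted as JSON; the spec version, placeholder key, and drop-in body are illustrative, and the exact schema should be checked against the Ignition configuration spec:

```python
#!/usr/bin/env python3
"""Emit a small Ignition-style config resembling what the files stage applied.

Rough sketch only: spec version, SSH key and drop-in contents are placeholders.
"""
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar-cgroupv1", "mode": 420, "contents": {"source": "data:,"}}
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {
                        "name": "10-use-cgroupfs.conf",
                        "contents": "[Service]\n# placeholder drop-in body\n",
                    }
                ],
            }
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))
```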
Jan 23 23:52:43.990442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:52:43.991612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:52:43.992781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:52:43.994050 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:52:43.995260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:52:43.996505 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:52:43.997600 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:52:43.998817 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:52:43.999803 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:52:43.999921 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:52:44.001371 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:44.002091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:44.003275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:52:44.004774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:44.005845 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:52:44.005955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:52:44.007828 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:52:44.007951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:44.009525 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:52:44.009620 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:52:44.010671 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:52:44.010778 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:52:44.020073 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:52:44.025395 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:52:44.026092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:52:44.026214 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:44.030064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:52:44.030162 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:52:44.037216 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:52:44.039382 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:52:44.043110 ignition[1018]: INFO : Ignition 2.19.0 Jan 23 23:52:44.043110 ignition[1018]: INFO : Stage: umount Jan 23 23:52:44.043110 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:44.043110 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 23:52:44.047749 ignition[1018]: INFO : umount: umount passed Jan 23 23:52:44.047749 ignition[1018]: INFO : Ignition finished successfully Jan 23 23:52:44.044688 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:52:44.046795 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 23 23:52:44.048555 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:52:44.048598 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:52:44.049466 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:52:44.049504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:52:44.050342 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:52:44.050384 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:52:44.051427 systemd[1]: Stopped target network.target - Network. Jan 23 23:52:44.051956 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:52:44.052000 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:52:44.053850 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:52:44.054362 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:52:44.059291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:44.060143 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:52:44.061321 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:52:44.062483 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:52:44.062592 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:52:44.063719 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:52:44.063803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:52:44.064950 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:52:44.065009 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:52:44.066425 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:52:44.066471 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:52:44.067815 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:52:44.070508 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:52:44.073228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:52:44.073900 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:52:44.075678 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:52:44.076837 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:52:44.076926 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:52:44.077941 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 23 23:52:44.079911 systemd-networkd[786]: eth1: DHCPv6 lease lost Jan 23 23:52:44.081135 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:52:44.081910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:52:44.085000 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:52:44.085124 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:52:44.087461 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:52:44.087514 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:44.092927 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:52:44.093450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:52:44.093501 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 23 23:52:44.094621 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:52:44.094657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:44.096735 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:52:44.096795 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:44.100017 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:52:44.100060 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:52:44.102837 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:44.114419 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:52:44.115788 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:52:44.121449 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:52:44.121600 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:44.123205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:52:44.123244 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:44.124204 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:52:44.124232 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:44.126424 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:52:44.126509 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:52:44.129270 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:52:44.129323 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:52:44.130860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:52:44.130898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:44.137931 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:52:44.138540 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:52:44.138588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:44.141289 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:52:44.141384 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:52:44.142298 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:52:44.142351 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:44.143662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:44.143705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:44.150400 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:52:44.150519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:52:44.151652 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:52:44.156871 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:52:44.166705 systemd[1]: Switching root. Jan 23 23:52:44.207814 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:52:44.207914 systemd-journald[237]: Journal stopped Jan 23 23:52:45.116064 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:52:45.116140 kernel: SELinux: policy capability open_perms=1 Jan 23 23:52:45.116155 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:52:45.116170 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:52:45.116180 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:52:45.116189 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:52:45.116199 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:52:45.116209 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:52:45.116226 kernel: audit: type=1403 audit(1769212364.425:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:52:45.116238 systemd[1]: Successfully loaded SELinux policy in 36.159ms. Jan 23 23:52:45.116257 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.478ms. Jan 23 23:52:45.116269 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:52:45.116280 systemd[1]: Detected virtualization kvm. Jan 23 23:52:45.116294 systemd[1]: Detected architecture arm64. Jan 23 23:52:45.116317 systemd[1]: Detected first boot. Jan 23 23:52:45.116328 systemd[1]: Hostname set to . Jan 23 23:52:45.116344 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:52:45.116356 zram_generator::config[1082]: No configuration found. Jan 23 23:52:45.116367 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:52:45.116377 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:52:45.116387 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 23:52:45.116398 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:52:45.116411 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:52:45.116421 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:52:45.116431 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:52:45.116443 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:52:45.116454 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:52:45.116464 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:52:45.116475 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:52:45.116485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:45.116496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:45.116507 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:52:45.116517 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:52:45.116529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
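"Detected first boot" and "Initializing machine ID from VM UUID" above mean systemd derived /etc/machine-id from the hypervisor-provided product UUID rather than generating a random one. A minimal sketch of that relationship, under the assumption that the UUID is exposed via DMI at /sys/class/dmi/id/product_uuid and that the machine ID is the UUID with dashes stripped and lower-cased (worth verifying against the systemd documentation for the exact rules):

```python
#!/usr/bin/env python3
"""Compare the DMI product UUID with /etc/machine-id.

Sketch under the assumption that on a VM first boot the machine ID is the
product UUID without dashes. Reading product_uuid typically requires root.
"""
def read(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    product_uuid = read("/sys/class/dmi/id/product_uuid")
    machine_id = read("/etc/machine-id")
    derived = product_uuid.replace("-", "").lower()
    print("product UUID :", product_uuid)
    print("machine-id   :", machine_id)
    print("match        :", derived == machine_id)
```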
Jan 23 23:52:45.116540 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:52:45.116550 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 23:52:45.116560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:45.116571 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:52:45.116581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:45.116592 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:52:45.116602 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:52:45.116614 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:52:45.116625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:52:45.116636 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:52:45.116647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:52:45.116657 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:52:45.116668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:45.116678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:45.116689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:45.116699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:52:45.116711 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:52:45.116721 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:52:45.116732 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:52:45.116747 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:52:45.116779 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:52:45.116795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:52:45.116806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:52:45.116816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:45.116827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:52:45.116838 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:52:45.116849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:45.116859 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:52:45.116870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:45.116880 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:52:45.116893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:45.116905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:52:45.116916 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 23 23:52:45.116928 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 23 23:52:45.116938 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:52:45.116948 kernel: fuse: init (API version 7.39) Jan 23 23:52:45.116959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:52:45.116970 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:52:45.116982 kernel: loop: module loaded Jan 23 23:52:45.116992 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:52:45.117028 systemd-journald[1165]: Collecting audit messages is disabled. Jan 23 23:52:45.117058 systemd-journald[1165]: Journal started Jan 23 23:52:45.117080 systemd-journald[1165]: Runtime Journal (/run/log/journal/202fd767e98f4f8a8d2105ac126d994e) is 8.0M, max 76.6M, 68.6M free. Jan 23 23:52:45.121775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:52:45.136870 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:52:45.136933 kernel: ACPI: bus type drm_connector registered Jan 23 23:52:45.138496 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:52:45.140493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:52:45.143009 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:52:45.143654 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:52:45.144506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:52:45.148079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:52:45.150158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:45.153455 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:52:45.153639 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:52:45.155764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:45.155942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:45.158692 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:52:45.158872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:52:45.159956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:45.160113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:45.161121 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:52:45.161266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:52:45.162426 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:45.162601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:45.165988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:45.168575 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:52:45.169924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:52:45.180688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:52:45.184285 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:52:45.190889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 23 23:52:45.193873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:52:45.196847 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:52:45.199048 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:52:45.210011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:52:45.212183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:52:45.214061 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:52:45.217445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:52:45.225079 systemd-journald[1165]: Time spent on flushing to /var/log/journal/202fd767e98f4f8a8d2105ac126d994e is 53.219ms for 1113 entries. Jan 23 23:52:45.225079 systemd-journald[1165]: System Journal (/var/log/journal/202fd767e98f4f8a8d2105ac126d994e) is 8.0M, max 584.8M, 576.8M free. Jan 23 23:52:45.288973 systemd-journald[1165]: Received client request to flush runtime journal. Jan 23 23:52:45.226928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:52:45.234678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:52:45.237840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:52:45.238614 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:52:45.261934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:52:45.265008 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:52:45.271203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:45.287195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:45.296084 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:52:45.301111 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:52:45.303029 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 23 23:52:45.303050 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 23 23:52:45.310392 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:52:45.316954 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:52:45.330249 udevadm[1228]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 23 23:52:45.354593 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:52:45.367099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:52:45.385260 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 23:52:45.385601 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 23:52:45.389852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:45.741748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 23 23:52:45.747913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:45.771971 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Jan 23 23:52:45.794551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:45.806197 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:52:45.831116 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:52:45.910922 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:52:45.916423 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 23 23:52:45.950485 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1264) Jan 23 23:52:46.017931 systemd-networkd[1249]: lo: Link UP Jan 23 23:52:46.018388 systemd-networkd[1249]: lo: Gained carrier Jan 23 23:52:46.020374 systemd-networkd[1249]: Enumeration completed Jan 23 23:52:46.020878 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:52:46.022746 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:46.023492 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:46.024809 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:46.024878 systemd-networkd[1249]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:46.025498 systemd-networkd[1249]: eth0: Link UP Jan 23 23:52:46.025590 systemd-networkd[1249]: eth0: Gained carrier Jan 23 23:52:46.025641 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:46.031041 systemd-networkd[1249]: eth1: Link UP Jan 23 23:52:46.031190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:52:46.032180 systemd-networkd[1249]: eth1: Gained carrier Jan 23 23:52:46.032375 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:46.055779 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 23:52:46.056348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 23:52:46.068837 systemd-networkd[1249]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 23:52:46.073992 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 23:52:46.074313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:46.080945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:46.085554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:46.088334 systemd-networkd[1249]: eth0: DHCPv4 address 49.13.80.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 23:52:46.088676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
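The DHCP leases above hand out single-address /32 prefixes with a gateway (172.31.1.1) that cannot lie inside the assigned prefix, which is the usual reason an explicit on-link host route to the gateway is needed before the default route works. A small standard-library check of that relationship, using the addresses from the log:

```python
#!/usr/bin/env python3
"""Show that the DHCP gateway lies outside the assigned /32 prefix."""
import ipaddress

lease = ipaddress.ip_interface("49.13.80.198/32")
gateway = ipaddress.ip_address("172.31.1.1")

print("interface network :", lease.network)
print("gateway           :", gateway)
print("gateway on-prefix :", gateway in lease.network)   # False for a /32 lease
```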
Jan 23 23:52:46.092835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:52:46.092951 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:52:46.093350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:46.093606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:46.103876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:46.104064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:46.108632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:52:46.112722 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:46.115030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:46.117531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:52:46.122084 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 23 23:52:46.122143 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 23:52:46.122156 kernel: [drm] features: -context_init Jan 23 23:52:46.128195 kernel: [drm] number of scanouts: 1 Jan 23 23:52:46.128275 kernel: [drm] number of cap sets: 0 Jan 23 23:52:46.131834 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 23 23:52:46.136806 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 23:52:46.138487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:46.144837 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 23:52:46.150844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:46.151125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:46.154336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:46.224380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:46.284665 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:52:46.293013 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:52:46.306106 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:52:46.330511 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:52:46.333065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:46.338997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:52:46.344616 lvm[1317]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:52:46.371342 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:52:46.373851 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:52:46.375920 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 23 23:52:46.376018 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:52:46.376678 systemd[1]: Reached target machines.target - Containers. Jan 23 23:52:46.378418 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:52:46.383971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:52:46.388319 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:52:46.390969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:46.394045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:52:46.397911 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:52:46.402682 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:52:46.406626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:52:46.422518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:52:46.433775 kernel: loop0: detected capacity change from 0 to 207008 Jan 23 23:52:46.439007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:52:46.440479 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:52:46.452930 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:52:46.478833 kernel: loop1: detected capacity change from 0 to 114432 Jan 23 23:52:46.519887 kernel: loop2: detected capacity change from 0 to 114328 Jan 23 23:52:46.553794 kernel: loop3: detected capacity change from 0 to 8 Jan 23 23:52:46.577985 kernel: loop4: detected capacity change from 0 to 207008 Jan 23 23:52:46.592428 kernel: loop5: detected capacity change from 0 to 114432 Jan 23 23:52:46.601819 kernel: loop6: detected capacity change from 0 to 114328 Jan 23 23:52:46.612049 kernel: loop7: detected capacity change from 0 to 8 Jan 23 23:52:46.612250 (sd-merge)[1338]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 23 23:52:46.612763 (sd-merge)[1338]: Merged extensions into '/usr'. Jan 23 23:52:46.617624 systemd[1]: Reloading requested from client PID 1325 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:52:46.617641 systemd[1]: Reloading... Jan 23 23:52:46.701790 zram_generator::config[1362]: No configuration found. Jan 23 23:52:46.803008 ldconfig[1321]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:52:46.831197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:46.890751 systemd[1]: Reloading finished in 272 ms. Jan 23 23:52:46.909699 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:52:46.914093 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:52:46.919946 systemd[1]: Starting ensure-sysext.service... Jan 23 23:52:46.925997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 23:52:46.930978 systemd[1]: Reloading requested from client PID 1410 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:52:46.931134 systemd[1]: Reloading... Jan 23 23:52:46.949988 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:52:46.950241 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:52:46.950890 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:52:46.951104 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 23 23:52:46.951148 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 23 23:52:46.955135 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:52:46.955147 systemd-tmpfiles[1411]: Skipping /boot Jan 23 23:52:46.964481 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:52:46.964495 systemd-tmpfiles[1411]: Skipping /boot Jan 23 23:52:47.011783 zram_generator::config[1443]: No configuration found. Jan 23 23:52:47.048886 systemd-networkd[1249]: eth1: Gained IPv6LL Jan 23 23:52:47.104500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:47.166930 systemd[1]: Reloading finished in 235 ms. Jan 23 23:52:47.183947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:52:47.185235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:52:47.216115 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:52:47.221120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:52:47.234037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:52:47.239555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:52:47.244005 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:52:47.246893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:47.250517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:47.266031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:47.272994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:47.273913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:47.276932 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:52:47.284365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:47.284533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:47.293452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:47.293748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:47.300840 augenrules[1512]: No rules Jan 23 23:52:47.302260 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 23:52:47.302616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:47.304069 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:52:47.317378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:52:47.326504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:47.335102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:47.339993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:52:47.343149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:47.355019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:47.355714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:47.360986 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:52:47.362717 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:52:47.366428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:47.366573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:47.377175 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:52:47.377371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:52:47.380472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:47.380649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:47.382347 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:47.384018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:47.390488 systemd[1]: Finished ensure-sysext.service. Jan 23 23:52:47.396463 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:52:47.400915 systemd-resolved[1497]: Positive Trust Anchors: Jan 23 23:52:47.401240 systemd-resolved[1497]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:52:47.401359 systemd-resolved[1497]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:52:47.402856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:52:47.402949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:52:47.406940 systemd-resolved[1497]: Using system hostname 'ci-4081-3-6-n-417febb2dd'. Jan 23 23:52:47.409066 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 23 23:52:47.410826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:52:47.411017 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:52:47.411867 systemd[1]: Reached target network.target - Network. Jan 23 23:52:47.412467 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:52:47.413138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:47.466632 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 23:52:47.469613 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:52:47.471334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:52:47.472132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:52:47.472882 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:52:47.473607 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:52:47.473639 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:52:47.474256 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:52:47.475024 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:52:47.475749 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:52:47.476466 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:52:47.477988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:52:47.480173 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:52:47.482438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:52:47.489505 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:52:47.490827 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:52:47.492038 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:52:47.493751 systemd[1]: System is tainted: cgroupsv1 Jan 23 23:52:47.493827 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:52:47.493850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:52:47.499918 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:52:47.503908 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:52:47.512916 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:52:47.515885 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:52:47.516256 systemd-timesyncd[1548]: Contacted time server 202.61.195.221:123 (0.flatcar.pool.ntp.org). Jan 23 23:52:47.516939 systemd-timesyncd[1548]: Initial clock synchronization to Fri 2026-01-23 23:52:47.607483 UTC. Jan 23 23:52:47.530956 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
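Note: the timesyncd entries above imply only a small initial clock step: the synchronization message is journalled at 23:52:47.516939 but reports the clock being set to 23:52:47.607483 UTC, roughly 90 ms ahead. A rough sketch of that arithmetic (timestamps copied from the log; approximate, since the journal stamp is taken just before the step):

    from datetime import datetime

    logged = datetime.fromisoformat("2026-01-23 23:52:47.516939")  # journal timestamp of the message
    synced = datetime.fromisoformat("2026-01-23 23:52:47.607483")  # time the clock was stepped to
    print((synced - logged).total_seconds())  # ~0.09 s forward adjustment
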
Jan 23 23:52:47.531932 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:52:47.536894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:47.545048 jq[1556]: false Jan 23 23:52:47.546962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:52:47.549050 coreos-metadata[1553]: Jan 23 23:52:47.546 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 23:52:47.552226 coreos-metadata[1553]: Jan 23 23:52:47.552 INFO Fetch successful Jan 23 23:52:47.552226 coreos-metadata[1553]: Jan 23 23:52:47.552 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 23:52:47.553914 coreos-metadata[1553]: Jan 23 23:52:47.552 INFO Fetch successful Jan 23 23:52:47.553945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:52:47.557792 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:52:47.561340 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 23:52:47.569629 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:52:47.572708 dbus-daemon[1555]: [system] SELinux support is enabled Jan 23 23:52:47.577215 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:52:47.595011 extend-filesystems[1559]: Found loop4 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found loop5 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found loop6 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found loop7 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda1 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda2 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda3 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found usr Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda4 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda6 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda7 Jan 23 23:52:47.595011 extend-filesystems[1559]: Found sda9 Jan 23 23:52:47.595011 extend-filesystems[1559]: Checking size of /dev/sda9 Jan 23 23:52:47.598906 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:52:47.600313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:52:47.607970 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:52:47.613323 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:52:47.619418 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:52:47.639150 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:52:47.639454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:52:47.648054 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:52:47.648885 jq[1587]: true Jan 23 23:52:47.648267 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:52:47.671959 extend-filesystems[1559]: Resized partition /dev/sda9 Jan 23 23:52:47.670407 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 23 23:52:47.689783 update_engine[1583]: I20260123 23:52:47.689232 1583 main.cc:92] Flatcar Update Engine starting Jan 23 23:52:47.690385 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:52:47.690668 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:52:47.701567 extend-filesystems[1607]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:52:47.696643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:52:47.696701 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:52:47.717893 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 23:52:47.717932 update_engine[1583]: I20260123 23:52:47.711042 1583 update_check_scheduler.cc:74] Next update check in 11m19s Jan 23 23:52:47.717974 jq[1602]: true Jan 23 23:52:47.698203 (ntainerd)[1608]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:52:47.698725 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:52:47.698749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:52:47.710209 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:52:47.711966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:52:47.715603 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:52:47.729743 tar[1596]: linux-arm64/LICENSE Jan 23 23:52:47.729743 tar[1596]: linux-arm64/helm Jan 23 23:52:47.752362 systemd-networkd[1249]: eth0: Gained IPv6LL Jan 23 23:52:47.803772 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1259) Jan 23 23:52:47.826355 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:52:47.828104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:52:47.863532 systemd-logind[1575]: New seat seat0. Jan 23 23:52:47.866643 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:52:47.867437 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 23:52:47.868649 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:52:47.895821 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 23:52:47.913142 extend-filesystems[1607]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 23:52:47.913142 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 23:52:47.913142 extend-filesystems[1607]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 23:52:47.921100 extend-filesystems[1559]: Resized filesystem in /dev/sda9 Jan 23 23:52:47.921100 extend-filesystems[1559]: Found sr0 Jan 23 23:52:47.916190 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:52:47.927170 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:52:47.916469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
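Note: the extend-filesystems/resize2fs output above grows the root filesystem on /dev/sda9 from 1617920 to 9393147 blocks of 4 KiB each. A short sketch of what that change amounts to in bytes (block counts and block size copied from the log; illustrative only):

    BLOCK_SIZE = 4096  # "9393147 (4k) blocks" in the resize2fs output above

    def blocks_to_gib(blocks: int) -> float:
        """Convert a count of 4 KiB filesystem blocks to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(round(blocks_to_gib(1_617_920), 1))  # ~6.2 GiB before the online resize
    print(round(blocks_to_gib(9_393_147), 1))  # ~35.8 GiB after growing into the full partition
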
Jan 23 23:52:47.926967 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:52:47.946204 systemd[1]: Starting sshkeys.service... Jan 23 23:52:47.976088 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:52:47.981797 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:52:48.053408 coreos-metadata[1665]: Jan 23 23:52:48.053 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 23:52:48.057738 coreos-metadata[1665]: Jan 23 23:52:48.057 INFO Fetch successful Jan 23 23:52:48.059072 unknown[1665]: wrote ssh authorized keys file for user: core Jan 23 23:52:48.107426 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:52:48.111291 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:52:48.112993 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:52:48.121415 systemd[1]: Finished sshkeys.service. Jan 23 23:52:48.129403 containerd[1608]: time="2026-01-23T23:52:48.129238256Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:52:48.170688 containerd[1608]: time="2026-01-23T23:52:48.169642099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.175082 containerd[1608]: time="2026-01-23T23:52:48.174825147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:48.175082 containerd[1608]: time="2026-01-23T23:52:48.174874787Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:52:48.175082 containerd[1608]: time="2026-01-23T23:52:48.174894045Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:52:48.175082 containerd[1608]: time="2026-01-23T23:52:48.175074482Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:52:48.175082 containerd[1608]: time="2026-01-23T23:52:48.175095439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.175278 containerd[1608]: time="2026-01-23T23:52:48.175181612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:48.175278 containerd[1608]: time="2026-01-23T23:52:48.175196743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.176761 containerd[1608]: time="2026-01-23T23:52:48.176489420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:48.176761 containerd[1608]: time="2026-01-23T23:52:48.176523687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:52:48.176761 containerd[1608]: time="2026-01-23T23:52:48.176541164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:48.176761 containerd[1608]: time="2026-01-23T23:52:48.176551238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.177568 containerd[1608]: time="2026-01-23T23:52:48.177363652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.177645 containerd[1608]: time="2026-01-23T23:52:48.177599313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:48.179586 containerd[1608]: time="2026-01-23T23:52:48.179482255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:48.179586 containerd[1608]: time="2026-01-23T23:52:48.179508876Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:52:48.179657 containerd[1608]: time="2026-01-23T23:52:48.179607671Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:52:48.179657 containerd[1608]: time="2026-01-23T23:52:48.179649504Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:52:48.186893 containerd[1608]: time="2026-01-23T23:52:48.186616415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:52:48.186893 containerd[1608]: time="2026-01-23T23:52:48.186681429Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:52:48.186893 containerd[1608]: time="2026-01-23T23:52:48.186698138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:52:48.186893 containerd[1608]: time="2026-01-23T23:52:48.186713997Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:52:48.186893 containerd[1608]: time="2026-01-23T23:52:48.186779011Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:52:48.187054 containerd[1608]: time="2026-01-23T23:52:48.186935377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188180234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188360347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188417149Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188439724Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188459103Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188478805Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188496161Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188515135Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188530954Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188548755Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188565868Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188582658Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188607417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.190660 containerd[1608]: time="2026-01-23T23:52:48.188626877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188644193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188666201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188682465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188700549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188716894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188733724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188751767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188789352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188806020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188829121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188852505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188873704Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188900891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188917438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191054 containerd[1608]: time="2026-01-23T23:52:48.188933055Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189054546Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189077526Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189089906Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189107869Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189122473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189138252Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189151805Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:52:48.191332 containerd[1608]: time="2026-01-23T23:52:48.189162445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:52:48.191523 containerd[1608]: time="2026-01-23T23:52:48.189554876Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:52:48.191523 containerd[1608]: time="2026-01-23T23:52:48.189629114Z" level=info msg="Connect containerd service" Jan 23 23:52:48.191523 containerd[1608]: time="2026-01-23T23:52:48.189662936Z" level=info msg="using legacy CRI server" Jan 23 23:52:48.191523 containerd[1608]: time="2026-01-23T23:52:48.189669814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:52:48.196793 containerd[1608]: time="2026-01-23T23:52:48.196740011Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:52:48.199143 containerd[1608]: time="2026-01-23T23:52:48.199108072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 
23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202433462Z" level=info msg="Start subscribing containerd event" Jan 23 23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202489252Z" level=info msg="Start recovering state" Jan 23 23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202596342Z" level=info msg="Start event monitor" Jan 23 23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202612484Z" level=info msg="Start snapshots syncer" Jan 23 23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202626846Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:52:48.202907 containerd[1608]: time="2026-01-23T23:52:48.202635382Z" level=info msg="Start streaming server" Jan 23 23:52:48.204855 containerd[1608]: time="2026-01-23T23:52:48.204821792Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:52:48.204906 containerd[1608]: time="2026-01-23T23:52:48.204882639Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:52:48.206019 containerd[1608]: time="2026-01-23T23:52:48.204934787Z" level=info msg="containerd successfully booted in 0.077361s" Jan 23 23:52:48.205062 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:52:48.607497 tar[1596]: linux-arm64/README.md Jan 23 23:52:48.624559 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:52:48.804541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:48.813203 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:52:49.016247 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:52:49.048753 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:52:49.064283 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:52:49.075998 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:52:49.076345 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:52:49.085096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:52:49.098138 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:52:49.107511 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:52:49.116134 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:52:49.117646 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:52:49.118370 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:52:49.121625 systemd[1]: Startup finished in 6.526s (kernel) + 4.731s (userspace) = 11.257s. Jan 23 23:52:49.322182 kubelet[1696]: E0123 23:52:49.322046 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:52:49.328007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:52:49.328194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:52:59.578673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:52:59.585963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 23:52:59.716172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:59.716275 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:52:59.766396 kubelet[1740]: E0123 23:52:59.766340 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:52:59.769748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:52:59.769906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:10.020940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:53:10.029106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:10.150982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:10.163477 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:10.210778 kubelet[1760]: E0123 23:53:10.210699 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:10.217101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:10.217489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:20.468015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:53:20.482172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:20.613975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:20.624488 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:20.670354 kubelet[1779]: E0123 23:53:20.670277 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:20.676029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:20.676196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:23.362800 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:53:23.374261 systemd[1]: Started sshd@0-49.13.80.198:22-20.161.92.111:34718.service - OpenSSH per-connection server daemon (20.161.92.111:34718). Jan 23 23:53:23.977663 sshd[1787]: Accepted publickey for core from 20.161.92.111 port 34718 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:23.980766 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:23.990399 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 23 23:53:24.001324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:53:24.005681 systemd-logind[1575]: New session 1 of user core. Jan 23 23:53:24.018662 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:53:24.026137 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:53:24.030716 (systemd)[1793]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:53:24.147797 systemd[1793]: Queued start job for default target default.target. Jan 23 23:53:24.148558 systemd[1793]: Created slice app.slice - User Application Slice. Jan 23 23:53:24.148720 systemd[1793]: Reached target paths.target - Paths. Jan 23 23:53:24.148823 systemd[1793]: Reached target timers.target - Timers. Jan 23 23:53:24.154920 systemd[1793]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:53:24.165900 systemd[1793]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:53:24.165966 systemd[1793]: Reached target sockets.target - Sockets. Jan 23 23:53:24.165979 systemd[1793]: Reached target basic.target - Basic System. Jan 23 23:53:24.166031 systemd[1793]: Reached target default.target - Main User Target. Jan 23 23:53:24.166057 systemd[1793]: Startup finished in 129ms. Jan 23 23:53:24.166518 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:53:24.171052 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:53:24.606111 systemd[1]: Started sshd@1-49.13.80.198:22-20.161.92.111:34722.service - OpenSSH per-connection server daemon (20.161.92.111:34722). Jan 23 23:53:25.221442 sshd[1805]: Accepted publickey for core from 20.161.92.111 port 34722 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:25.223650 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:25.229674 systemd-logind[1575]: New session 2 of user core. Jan 23 23:53:25.235204 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:53:25.661119 sshd[1805]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:25.665112 systemd[1]: sshd@1-49.13.80.198:22-20.161.92.111:34722.service: Deactivated successfully. Jan 23 23:53:25.669223 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:53:25.669312 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:53:25.671362 systemd-logind[1575]: Removed session 2. Jan 23 23:53:25.763470 systemd[1]: Started sshd@2-49.13.80.198:22-20.161.92.111:34736.service - OpenSSH per-connection server daemon (20.161.92.111:34736). Jan 23 23:53:26.346255 sshd[1813]: Accepted publickey for core from 20.161.92.111 port 34736 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:26.348253 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:26.354491 systemd-logind[1575]: New session 3 of user core. Jan 23 23:53:26.361160 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:53:26.762227 sshd[1813]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:26.768248 systemd[1]: sshd@2-49.13.80.198:22-20.161.92.111:34736.service: Deactivated successfully. Jan 23 23:53:26.772277 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:53:26.773143 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 23 23:53:26.774010 systemd-logind[1575]: Removed session 3. Jan 23 23:53:26.869155 systemd[1]: Started sshd@3-49.13.80.198:22-20.161.92.111:34744.service - OpenSSH per-connection server daemon (20.161.92.111:34744). Jan 23 23:53:27.470375 sshd[1821]: Accepted publickey for core from 20.161.92.111 port 34744 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:27.472581 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:27.477419 systemd-logind[1575]: New session 4 of user core. Jan 23 23:53:27.490937 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:53:27.903212 sshd[1821]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:27.909306 systemd[1]: sshd@3-49.13.80.198:22-20.161.92.111:34744.service: Deactivated successfully. Jan 23 23:53:27.913658 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:53:27.914356 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:53:27.915473 systemd-logind[1575]: Removed session 4. Jan 23 23:53:28.003241 systemd[1]: Started sshd@4-49.13.80.198:22-20.161.92.111:34748.service - OpenSSH per-connection server daemon (20.161.92.111:34748). Jan 23 23:53:28.585438 sshd[1829]: Accepted publickey for core from 20.161.92.111 port 34748 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:28.587851 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:28.593172 systemd-logind[1575]: New session 5 of user core. Jan 23 23:53:28.599248 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:53:28.919616 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:53:28.920105 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:28.937817 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:29.033250 sshd[1829]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:29.039960 systemd[1]: sshd@4-49.13.80.198:22-20.161.92.111:34748.service: Deactivated successfully. Jan 23 23:53:29.043443 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:53:29.044421 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:53:29.045310 systemd-logind[1575]: Removed session 5. Jan 23 23:53:29.138040 systemd[1]: Started sshd@5-49.13.80.198:22-20.161.92.111:34760.service - OpenSSH per-connection server daemon (20.161.92.111:34760). Jan 23 23:53:29.755691 sshd[1838]: Accepted publickey for core from 20.161.92.111 port 34760 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:29.757891 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:29.763270 systemd-logind[1575]: New session 6 of user core. Jan 23 23:53:29.769281 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 23:53:30.096198 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:53:30.096490 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:30.101308 sudo[1843]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:30.106594 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:53:30.107009 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:30.127165 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:30.128891 auditctl[1846]: No rules Jan 23 23:53:30.129345 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:53:30.129588 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:30.134139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:30.163048 augenrules[1865]: No rules Jan 23 23:53:30.165128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:30.167001 sudo[1842]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:30.266466 sshd[1838]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:30.273029 systemd[1]: sshd@5-49.13.80.198:22-20.161.92.111:34760.service: Deactivated successfully. Jan 23 23:53:30.277074 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:53:30.278056 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:53:30.279002 systemd-logind[1575]: Removed session 6. Jan 23 23:53:30.367194 systemd[1]: Started sshd@6-49.13.80.198:22-20.161.92.111:34768.service - OpenSSH per-connection server daemon (20.161.92.111:34768). Jan 23 23:53:30.851323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 23:53:30.862211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:30.953362 sshd[1874]: Accepted publickey for core from 20.161.92.111 port 34768 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:53:30.956640 sshd[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:30.976807 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:53:30.977603 systemd-logind[1575]: New session 7 of user core. Jan 23 23:53:30.984948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:30.989709 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:31.028388 kubelet[1889]: E0123 23:53:31.028307 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:31.033036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:31.033200 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
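Note: the repeated kubelet failures above follow a fixed cadence: the unit exits because /var/lib/kubelet/config.yaml does not exist yet (presumably it is written later by the cluster bootstrap tooling), and systemd schedules a restart roughly every ten seconds. A small sketch that recovers that cadence from the "Scheduled restart job" timestamps above (timestamps copied from the log; the journal's short format omits the year, so the parser leaves it at its default):

    from datetime import datetime

    restarts = [
        "Jan 23 23:52:59.578673",  # restart counter is at 1
        "Jan 23 23:53:10.020940",  # restart counter is at 2
        "Jan 23 23:53:20.468015",  # restart counter is at 3
        "Jan 23 23:53:30.851323",  # restart counter is at 4
    ]

    def parse(stamp: str) -> datetime:
        # %f accepts the six-digit fractional seconds used in the journal output.
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

    gaps = [(parse(b) - parse(a)).total_seconds() for a, b in zip(restarts, restarts[1:])]
    print(gaps)  # roughly 10.3-10.4 s apart, consistent with a ~10 s restart delay plus start-up time
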
Jan 23 23:53:31.277916 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:53:31.278218 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:31.595532 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:53:31.596376 (dockerd)[1913]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:53:31.850384 dockerd[1913]: time="2026-01-23T23:53:31.850224943Z" level=info msg="Starting up" Jan 23 23:53:31.949126 dockerd[1913]: time="2026-01-23T23:53:31.949082960Z" level=info msg="Loading containers: start." Jan 23 23:53:32.049811 kernel: Initializing XFRM netlink socket Jan 23 23:53:32.134732 systemd-networkd[1249]: docker0: Link UP Jan 23 23:53:32.155240 dockerd[1913]: time="2026-01-23T23:53:32.154988996Z" level=info msg="Loading containers: done." Jan 23 23:53:32.179419 dockerd[1913]: time="2026-01-23T23:53:32.179297084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:53:32.179419 dockerd[1913]: time="2026-01-23T23:53:32.179422413Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:53:32.180049 dockerd[1913]: time="2026-01-23T23:53:32.179589145Z" level=info msg="Daemon has completed initialization" Jan 23 23:53:32.218857 dockerd[1913]: time="2026-01-23T23:53:32.217961879Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:53:32.219256 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:53:32.865215 update_engine[1583]: I20260123 23:53:32.865059 1583 update_attempter.cc:509] Updating boot flags... Jan 23 23:53:32.930973 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2058) Jan 23 23:53:32.979783 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2058) Jan 23 23:53:33.043780 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2058) Jan 23 23:53:33.268468 containerd[1608]: time="2026-01-23T23:53:33.268348687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:53:33.982640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665946940.mount: Deactivated successfully. 
Jan 23 23:53:34.961787 containerd[1608]: time="2026-01-23T23:53:34.960381471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:34.964047 containerd[1608]: time="2026-01-23T23:53:34.963939070Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080" Jan 23 23:53:34.965457 containerd[1608]: time="2026-01-23T23:53:34.965071666Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:34.969024 containerd[1608]: time="2026-01-23T23:53:34.968991210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:34.970394 containerd[1608]: time="2026-01-23T23:53:34.970346701Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.701949331s" Jan 23 23:53:34.970394 containerd[1608]: time="2026-01-23T23:53:34.970392344Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:53:34.971058 containerd[1608]: time="2026-01-23T23:53:34.971031227Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:53:36.270850 containerd[1608]: time="2026-01-23T23:53:36.270116585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:36.272791 containerd[1608]: time="2026-01-23T23:53:36.272495370Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106" Jan 23 23:53:36.274361 containerd[1608]: time="2026-01-23T23:53:36.274242197Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:36.280804 containerd[1608]: time="2026-01-23T23:53:36.279850139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:36.282686 containerd[1608]: time="2026-01-23T23:53:36.282484340Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.31141631s" Jan 23 23:53:36.282686 containerd[1608]: time="2026-01-23T23:53:36.282532423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 
23:53:36.284221 containerd[1608]: time="2026-01-23T23:53:36.283974151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:53:37.439033 containerd[1608]: time="2026-01-23T23:53:37.438868358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:37.440702 containerd[1608]: time="2026-01-23T23:53:37.440558577Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767" Jan 23 23:53:37.442541 containerd[1608]: time="2026-01-23T23:53:37.442444406Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:37.448159 containerd[1608]: time="2026-01-23T23:53:37.447853441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:37.448776 containerd[1608]: time="2026-01-23T23:53:37.448619126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.164596972s" Jan 23 23:53:37.448776 containerd[1608]: time="2026-01-23T23:53:37.448652328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:53:37.450404 containerd[1608]: time="2026-01-23T23:53:37.450170056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:53:38.456680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422158762.mount: Deactivated successfully. 
Jan 23 23:53:38.815038 containerd[1608]: time="2026-01-23T23:53:38.814984609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:38.816685 containerd[1608]: time="2026-01-23T23:53:38.816569617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750" Jan 23 23:53:38.816685 containerd[1608]: time="2026-01-23T23:53:38.816623700Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:38.820525 containerd[1608]: time="2026-01-23T23:53:38.819225005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:38.820525 containerd[1608]: time="2026-01-23T23:53:38.820145456Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.369936958s" Jan 23 23:53:38.820525 containerd[1608]: time="2026-01-23T23:53:38.820225261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:53:38.821114 containerd[1608]: time="2026-01-23T23:53:38.821018745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:53:39.457004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963516527.mount: Deactivated successfully. 
Jan 23 23:53:40.119425 containerd[1608]: time="2026-01-23T23:53:40.119346710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.121172 containerd[1608]: time="2026-01-23T23:53:40.120738741Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 23 23:53:40.123019 containerd[1608]: time="2026-01-23T23:53:40.122105210Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.126641 containerd[1608]: time="2026-01-23T23:53:40.126593438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.128541 containerd[1608]: time="2026-01-23T23:53:40.128491014Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.307431628s" Jan 23 23:53:40.128541 containerd[1608]: time="2026-01-23T23:53:40.128537657Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:53:40.129186 containerd[1608]: time="2026-01-23T23:53:40.129147087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:53:40.641434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732282586.mount: Deactivated successfully. 
Jan 23 23:53:40.648855 containerd[1608]: time="2026-01-23T23:53:40.648800230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.650100 containerd[1608]: time="2026-01-23T23:53:40.650060974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 23:53:40.651065 containerd[1608]: time="2026-01-23T23:53:40.650743969Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.653558 containerd[1608]: time="2026-01-23T23:53:40.653521550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:40.654580 containerd[1608]: time="2026-01-23T23:53:40.654548762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.359313ms" Jan 23 23:53:40.654847 containerd[1608]: time="2026-01-23T23:53:40.654726931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:53:40.655843 containerd[1608]: time="2026-01-23T23:53:40.655805306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:53:41.050630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 23:53:41.058067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:41.213052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:41.216695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262566314.mount: Deactivated successfully. Jan 23 23:53:41.217096 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:41.272635 kubelet[2216]: E0123 23:53:41.272237 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:41.279026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:41.279224 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:53:42.914889 containerd[1608]: time="2026-01-23T23:53:42.914814506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:42.917361 containerd[1608]: time="2026-01-23T23:53:42.917285981Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Jan 23 23:53:42.918798 containerd[1608]: time="2026-01-23T23:53:42.917739122Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:42.923478 containerd[1608]: time="2026-01-23T23:53:42.923431427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:42.926105 containerd[1608]: time="2026-01-23T23:53:42.926057870Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.270123277s" Jan 23 23:53:42.926193 containerd[1608]: time="2026-01-23T23:53:42.926103152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:53:50.380818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:50.388125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:50.424262 systemd[1]: Reloading requested from client PID 2303 ('systemctl') (unit session-7.scope)... Jan 23 23:53:50.424275 systemd[1]: Reloading... Jan 23 23:53:50.541787 zram_generator::config[2350]: No configuration found. Jan 23 23:53:50.641365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:50.709335 systemd[1]: Reloading finished in 284 ms. Jan 23 23:53:50.765648 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:53:50.765725 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:53:50.766380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:50.770846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:50.914241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:50.923176 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:53:50.967781 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:50.967781 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 23:53:50.967781 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:50.967781 kubelet[2404]: I0123 23:53:50.966355 2404 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:53:51.316516 kubelet[2404]: I0123 23:53:51.316465 2404 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:53:51.316516 kubelet[2404]: I0123 23:53:51.316500 2404 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:53:51.316823 kubelet[2404]: I0123 23:53:51.316794 2404 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:53:51.346664 kubelet[2404]: E0123 23:53:51.345078 2404 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.80.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:51.346900 kubelet[2404]: I0123 23:53:51.346740 2404 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:53:51.356786 kubelet[2404]: E0123 23:53:51.356723 2404 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:53:51.356970 kubelet[2404]: I0123 23:53:51.356957 2404 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:53:51.359660 kubelet[2404]: I0123 23:53:51.359632 2404 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:53:51.361199 kubelet[2404]: I0123 23:53:51.361145 2404 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:53:51.361525 kubelet[2404]: I0123 23:53:51.361314 2404 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-417febb2dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:53:51.361739 kubelet[2404]: I0123 23:53:51.361724 2404 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:53:51.361818 kubelet[2404]: I0123 23:53:51.361808 2404 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:53:51.362059 kubelet[2404]: I0123 23:53:51.362045 2404 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:51.365813 kubelet[2404]: I0123 23:53:51.365788 2404 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:53:51.365923 kubelet[2404]: I0123 23:53:51.365912 2404 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:53:51.365989 kubelet[2404]: I0123 23:53:51.365981 2404 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:53:51.366045 kubelet[2404]: I0123 23:53:51.366037 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:53:51.372346 kubelet[2404]: W0123 23:53:51.372256 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.80.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-417febb2dd&limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:51.372484 kubelet[2404]: E0123 23:53:51.372353 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.80.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-417febb2dd&limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:51.372908 
kubelet[2404]: W0123 23:53:51.372864 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.80.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:51.372959 kubelet[2404]: E0123 23:53:51.372912 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.80.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:51.373425 kubelet[2404]: I0123 23:53:51.373392 2404 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:53:51.376733 kubelet[2404]: I0123 23:53:51.374670 2404 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:53:51.376733 kubelet[2404]: W0123 23:53:51.374885 2404 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:53:51.377245 kubelet[2404]: I0123 23:53:51.377221 2404 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:53:51.377282 kubelet[2404]: I0123 23:53:51.377262 2404 server.go:1287] "Started kubelet" Jan 23 23:53:51.378261 kubelet[2404]: I0123 23:53:51.378226 2404 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:53:51.379175 kubelet[2404]: I0123 23:53:51.379157 2404 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:53:51.383002 kubelet[2404]: I0123 23:53:51.382926 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:53:51.383270 kubelet[2404]: I0123 23:53:51.383247 2404 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:53:51.383848 kubelet[2404]: E0123 23:53:51.383453 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.80.198:6443/api/v1/namespaces/default/events\": dial tcp 49.13.80.198:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-417febb2dd.188d815528721567 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-417febb2dd,UID:ci-4081-3-6-n-417febb2dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-417febb2dd,},FirstTimestamp:2026-01-23 23:53:51.377241447 +0000 UTC m=+0.450888939,LastTimestamp:2026-01-23 23:53:51.377241447 +0000 UTC m=+0.450888939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-417febb2dd,}" Jan 23 23:53:51.385815 kubelet[2404]: I0123 23:53:51.385795 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:53:51.387057 kubelet[2404]: I0123 23:53:51.387027 2404 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:53:51.390428 kubelet[2404]: I0123 23:53:51.390404 2404 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:53:51.391191 kubelet[2404]: E0123 23:53:51.390749 
2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:51.392406 kubelet[2404]: I0123 23:53:51.392254 2404 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:53:51.392406 kubelet[2404]: I0123 23:53:51.392353 2404 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:53:51.393175 kubelet[2404]: W0123 23:53:51.393122 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.80.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:51.393239 kubelet[2404]: E0123 23:53:51.393178 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.80.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:51.393276 kubelet[2404]: E0123 23:53:51.393242 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.80.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-417febb2dd?timeout=10s\": dial tcp 49.13.80.198:6443: connect: connection refused" interval="200ms" Jan 23 23:53:51.396321 kubelet[2404]: E0123 23:53:51.395882 2404 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:53:51.396321 kubelet[2404]: I0123 23:53:51.396097 2404 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:53:51.396321 kubelet[2404]: I0123 23:53:51.396193 2404 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:53:51.398545 kubelet[2404]: I0123 23:53:51.398526 2404 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:53:51.426438 kubelet[2404]: I0123 23:53:51.426394 2404 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:53:51.427696 kubelet[2404]: I0123 23:53:51.427671 2404 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:53:51.427855 kubelet[2404]: I0123 23:53:51.427842 2404 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:53:51.427922 kubelet[2404]: I0123 23:53:51.427913 2404 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:53:51.428388 kubelet[2404]: I0123 23:53:51.427962 2404 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:53:51.428933 kubelet[2404]: E0123 23:53:51.428875 2404 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:53:51.430173 kubelet[2404]: W0123 23:53:51.430134 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.80.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:51.430711 kubelet[2404]: E0123 23:53:51.430264 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.80.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:51.431275 kubelet[2404]: I0123 23:53:51.431247 2404 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:53:51.431275 kubelet[2404]: I0123 23:53:51.431266 2404 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:53:51.431354 kubelet[2404]: I0123 23:53:51.431282 2404 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:51.434609 kubelet[2404]: I0123 23:53:51.434575 2404 policy_none.go:49] "None policy: Start" Jan 23 23:53:51.434609 kubelet[2404]: I0123 23:53:51.434604 2404 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:53:51.434609 kubelet[2404]: I0123 23:53:51.434616 2404 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:53:51.444127 kubelet[2404]: I0123 23:53:51.444086 2404 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:53:51.445523 kubelet[2404]: I0123 23:53:51.444321 2404 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:53:51.445523 kubelet[2404]: I0123 23:53:51.444333 2404 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:53:51.445523 kubelet[2404]: I0123 23:53:51.445296 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:53:51.447378 kubelet[2404]: E0123 23:53:51.447359 2404 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 23:53:51.447521 kubelet[2404]: E0123 23:53:51.447507 2404 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:51.539273 kubelet[2404]: E0123 23:53:51.539240 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.543025 kubelet[2404]: E0123 23:53:51.541842 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.543025 kubelet[2404]: E0123 23:53:51.542654 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.545860 kubelet[2404]: I0123 23:53:51.545835 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.546241 kubelet[2404]: E0123 23:53:51.546213 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.80.198:6443/api/v1/nodes\": dial tcp 49.13.80.198:6443: connect: connection refused" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.595825 kubelet[2404]: E0123 23:53:51.594280 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.80.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-417febb2dd?timeout=10s\": dial tcp 49.13.80.198:6443: connect: connection refused" interval="400ms" Jan 23 23:53:51.694341 kubelet[2404]: I0123 23:53:51.693883 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694341 kubelet[2404]: I0123 23:53:51.693961 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694341 kubelet[2404]: I0123 23:53:51.694003 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694341 kubelet[2404]: I0123 23:53:51.694043 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694341 kubelet[2404]: I0123 23:53:51.694071 2404
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694697 kubelet[2404]: I0123 23:53:51.694099 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694697 kubelet[2404]: I0123 23:53:51.694131 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694697 kubelet[2404]: I0123 23:53:51.694159 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11fc5bad864fcef78395fff264cfaa87-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-417febb2dd\" (UID: \"11fc5bad864fcef78395fff264cfaa87\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.694697 kubelet[2404]: I0123 23:53:51.694191 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.749724 kubelet[2404]: I0123 23:53:51.749277 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.749941 kubelet[2404]: E0123 23:53:51.749816 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.80.198:6443/api/v1/nodes\": dial tcp 49.13.80.198:6443: connect: connection refused" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:51.841923 containerd[1608]: time="2026-01-23T23:53:51.841796230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-417febb2dd,Uid:6aa46b04a2b2c2073da05c9882626e05,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:51.842973 containerd[1608]: time="2026-01-23T23:53:51.842874226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-417febb2dd,Uid:8ded843612ec5d4f5a54fe9ca2678fa9,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:51.845051 containerd[1608]: time="2026-01-23T23:53:51.844904293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-417febb2dd,Uid:11fc5bad864fcef78395fff264cfaa87,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:51.997027 kubelet[2404]: E0123 23:53:51.996876 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.80.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-417febb2dd?timeout=10s\": dial tcp 
49.13.80.198:6443: connect: connection refused" interval="800ms" Jan 23 23:53:52.152653 kubelet[2404]: I0123 23:53:52.152122 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:52.152653 kubelet[2404]: E0123 23:53:52.152622 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.80.198:6443/api/v1/nodes\": dial tcp 49.13.80.198:6443: connect: connection refused" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:52.363149 kubelet[2404]: W0123 23:53:52.363116 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.80.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:52.364568 kubelet[2404]: E0123 23:53:52.364523 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.80.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:52.366056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529243400.mount: Deactivated successfully. Jan 23 23:53:52.373511 containerd[1608]: time="2026-01-23T23:53:52.373464151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.381780 containerd[1608]: time="2026-01-23T23:53:52.381162917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 23 23:53:52.381945 containerd[1608]: time="2026-01-23T23:53:52.381912661Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.382689 containerd[1608]: time="2026-01-23T23:53:52.382663165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.383314 containerd[1608]: time="2026-01-23T23:53:52.383217423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:53:52.385775 containerd[1608]: time="2026-01-23T23:53:52.384284857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:53:52.385775 containerd[1608]: time="2026-01-23T23:53:52.384420901Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.388298 containerd[1608]: time="2026-01-23T23:53:52.388252184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.389377 containerd[1608]: time="2026-01-23T23:53:52.389346779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.448665ms" Jan 23 23:53:52.391175 containerd[1608]: time="2026-01-23T23:53:52.391136916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.16298ms" Jan 23 23:53:52.391840 containerd[1608]: time="2026-01-23T23:53:52.391796697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.847829ms" Jan 23 23:53:52.393086 kubelet[2404]: W0123 23:53:52.393057 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.80.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:52.393158 kubelet[2404]: E0123 23:53:52.393102 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.80.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:52.523184 containerd[1608]: time="2026-01-23T23:53:52.523035970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:52.523184 containerd[1608]: time="2026-01-23T23:53:52.523093852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:52.523667 containerd[1608]: time="2026-01-23T23:53:52.523390062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.526184551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.527167382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.527207224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.527221424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.527290706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.526852852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.526899734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.526926935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.527456 containerd[1608]: time="2026-01-23T23:53:52.527015098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.608510 containerd[1608]: time="2026-01-23T23:53:52.608342896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-417febb2dd,Uid:6aa46b04a2b2c2073da05c9882626e05,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc274df3bafa5ada8cabda0d9cd88b778234dcb6f8d6054fb6617993b9704d18\"" Jan 23 23:53:52.614166 containerd[1608]: time="2026-01-23T23:53:52.614042959Z" level=info msg="CreateContainer within sandbox \"cc274df3bafa5ada8cabda0d9cd88b778234dcb6f8d6054fb6617993b9704d18\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:53:52.619732 containerd[1608]: time="2026-01-23T23:53:52.619680419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-417febb2dd,Uid:8ded843612ec5d4f5a54fe9ca2678fa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b257bd80acc2d3209ef27c25590bcf852a25ca906e65f5d0c1b1de113f275c82\"" Jan 23 23:53:52.622707 containerd[1608]: time="2026-01-23T23:53:52.622675154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-417febb2dd,Uid:11fc5bad864fcef78395fff264cfaa87,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3fc9fd49d3c8eec134823cfd2c86c79c9c18cfcf74bfc95aff9c5df3b784525\"" Jan 23 23:53:52.625877 containerd[1608]: time="2026-01-23T23:53:52.625522645Z" level=info msg="CreateContainer within sandbox \"b257bd80acc2d3209ef27c25590bcf852a25ca906e65f5d0c1b1de113f275c82\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:53:52.627186 containerd[1608]: time="2026-01-23T23:53:52.627115576Z" level=info msg="CreateContainer within sandbox \"d3fc9fd49d3c8eec134823cfd2c86c79c9c18cfcf74bfc95aff9c5df3b784525\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:53:52.638368 containerd[1608]: time="2026-01-23T23:53:52.638299974Z" level=info msg="CreateContainer within sandbox \"cc274df3bafa5ada8cabda0d9cd88b778234dcb6f8d6054fb6617993b9704d18\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5896cc344a78cf0cdda85a7960a7bdaf579d31f1c88f061cb209a99967c131a3\"" Jan 23 23:53:52.639233 containerd[1608]: time="2026-01-23T23:53:52.639205323Z" level=info msg="StartContainer for \"5896cc344a78cf0cdda85a7960a7bdaf579d31f1c88f061cb209a99967c131a3\"" Jan 23 23:53:52.647991 containerd[1608]: time="2026-01-23T23:53:52.647889360Z" level=info msg="CreateContainer within sandbox \"b257bd80acc2d3209ef27c25590bcf852a25ca906e65f5d0c1b1de113f275c82\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571\"" Jan 23 23:53:52.648879 containerd[1608]: time="2026-01-23T23:53:52.648772268Z" level=info msg="StartContainer for \"a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571\"" 
Jan 23 23:53:52.650490 containerd[1608]: time="2026-01-23T23:53:52.650422681Z" level=info msg="CreateContainer within sandbox \"d3fc9fd49d3c8eec134823cfd2c86c79c9c18cfcf74bfc95aff9c5df3b784525\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79\"" Jan 23 23:53:52.651042 containerd[1608]: time="2026-01-23T23:53:52.651023100Z" level=info msg="StartContainer for \"75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79\"" Jan 23 23:53:52.726643 kubelet[2404]: W0123 23:53:52.726561 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.80.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:52.726894 kubelet[2404]: E0123 23:53:52.726846 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.80.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:52.740821 containerd[1608]: time="2026-01-23T23:53:52.739554849Z" level=info msg="StartContainer for \"5896cc344a78cf0cdda85a7960a7bdaf579d31f1c88f061cb209a99967c131a3\" returns successfully" Jan 23 23:53:52.753251 containerd[1608]: time="2026-01-23T23:53:52.751797760Z" level=info msg="StartContainer for \"75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79\" returns successfully" Jan 23 23:53:52.768335 containerd[1608]: time="2026-01-23T23:53:52.768227405Z" level=info msg="StartContainer for \"a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571\" returns successfully" Jan 23 23:53:52.798444 kubelet[2404]: E0123 23:53:52.798294 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.80.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-417febb2dd?timeout=10s\": dial tcp 49.13.80.198:6443: connect: connection refused" interval="1.6s" Jan 23 23:53:52.823780 kubelet[2404]: W0123 23:53:52.822286 2404 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.80.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-417febb2dd&limit=500&resourceVersion=0": dial tcp 49.13.80.198:6443: connect: connection refused Jan 23 23:53:52.823780 kubelet[2404]: E0123 23:53:52.822361 2404 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.80.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-417febb2dd&limit=500&resourceVersion=0\": dial tcp 49.13.80.198:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:52.956803 kubelet[2404]: I0123 23:53:52.956192 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:53.441181 kubelet[2404]: E0123 23:53:53.441153 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:53.445486 kubelet[2404]: E0123 23:53:53.444536 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd"
Jan 23 23:53:53.450921 kubelet[2404]: E0123 23:53:53.450892 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:54.452385 kubelet[2404]: E0123 23:53:54.452347 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:54.452773 kubelet[2404]: E0123 23:53:54.452737 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.228029 kubelet[2404]: E0123 23:53:55.227976 2404 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.330164 kubelet[2404]: I0123 23:53:55.330119 2404 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.330164 kubelet[2404]: E0123 23:53:55.330164 2404 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-417febb2dd\": node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.354770 kubelet[2404]: E0123 23:53:55.353798 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.455262 kubelet[2404]: E0123 23:53:55.454803 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.455262 kubelet[2404]: E0123 23:53:55.455119 2404 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-417febb2dd\" not found" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.555147 kubelet[2404]: E0123 23:53:55.555104 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.655613 kubelet[2404]: E0123 23:53:55.655568 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.756746 kubelet[2404]: E0123 23:53:55.756698 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.857843 kubelet[2404]: E0123 23:53:55.857453 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-417febb2dd\" not found" Jan 23 23:53:55.892913 kubelet[2404]: I0123 23:53:55.892870 2404 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.902490 kubelet[2404]: E0123 23:53:55.902263 2404 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.902490 kubelet[2404]: I0123 23:53:55.902299 2404 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd"
Jan 23 23:53:55.905381 kubelet[2404]: E0123 23:53:55.905345 2404 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.905381 kubelet[2404]: I0123 23:53:55.905377 2404 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:55.907008 kubelet[2404]: E0123 23:53:55.906972 2404 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-417febb2dd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:56.376396 kubelet[2404]: I0123 23:53:56.376324 2404 apiserver.go:52] "Watching apiserver" Jan 23 23:53:56.392796 kubelet[2404]: I0123 23:53:56.392364 2404 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:53:56.502003 kubelet[2404]: I0123 23:53:56.501861 2404 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:57.375158 systemd[1]: Reloading requested from client PID 2671 ('systemctl') (unit session-7.scope)... Jan 23 23:53:57.375179 systemd[1]: Reloading... Jan 23 23:53:57.464974 zram_generator::config[2711]: No configuration found. Jan 23 23:53:57.568605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:57.653134 systemd[1]: Reloading finished in 277 ms. Jan 23 23:53:57.687862 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:57.712340 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:53:57.713036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:57.721887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:57.860967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:57.868372 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:53:57.916663 kubelet[2766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:57.918396 kubelet[2766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:53:57.918396 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:53:57.918396 kubelet[2766]: I0123 23:53:57.916879 2766 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:53:57.930804 kubelet[2766]: I0123 23:53:57.930770 2766 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:53:57.931094 kubelet[2766]: I0123 23:53:57.931080 2766 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:53:57.936319 kubelet[2766]: I0123 23:53:57.936293 2766 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:53:57.937801 kubelet[2766]: I0123 23:53:57.937772 2766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:53:57.941728 kubelet[2766]: I0123 23:53:57.941660 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:53:57.945113 kubelet[2766]: E0123 23:53:57.945081 2766 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:53:57.945208 kubelet[2766]: I0123 23:53:57.945118 2766 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:53:57.948782 kubelet[2766]: I0123 23:53:57.948243 2766 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 23:53:57.948782 kubelet[2766]: I0123 23:53:57.948714 2766 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:53:57.948960 kubelet[2766]: I0123 23:53:57.948742 2766 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-417febb2dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:53:57.949097 kubelet[2766]: I0123 23:53:57.948969 2766 
topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:53:57.949097 kubelet[2766]: I0123 23:53:57.948979 2766 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:53:57.949097 kubelet[2766]: I0123 23:53:57.949022 2766 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:57.949196 kubelet[2766]: I0123 23:53:57.949149 2766 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:53:57.949196 kubelet[2766]: I0123 23:53:57.949160 2766 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:53:57.949196 kubelet[2766]: I0123 23:53:57.949176 2766 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:53:57.949196 kubelet[2766]: I0123 23:53:57.949185 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:53:57.960523 kubelet[2766]: I0123 23:53:57.960494 2766 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:53:57.961208 kubelet[2766]: I0123 23:53:57.961188 2766 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:53:57.964193 kubelet[2766]: I0123 23:53:57.964170 2766 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:53:57.964894 kubelet[2766]: I0123 23:53:57.964867 2766 server.go:1287] "Started kubelet" Jan 23 23:53:57.973348 kubelet[2766]: I0123 23:53:57.973311 2766 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:53:57.973985 kubelet[2766]: I0123 23:53:57.973931 2766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:53:57.974223 kubelet[2766]: I0123 23:53:57.974201 2766 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:53:57.974797 kubelet[2766]: I0123 23:53:57.974782 2766 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:53:57.977368 kubelet[2766]: I0123 23:53:57.977351 2766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:53:57.991434 kubelet[2766]: I0123 23:53:57.991406 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:53:57.996398 kubelet[2766]: I0123 23:53:57.994568 2766 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:53:57.998911 kubelet[2766]: I0123 23:53:57.998883 2766 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:53:57.999041 kubelet[2766]: I0123 23:53:57.999026 2766 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:53:58.001119 kubelet[2766]: I0123 23:53:58.001085 2766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:53:58.003400 kubelet[2766]: I0123 23:53:58.002384 2766 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:53:58.003400 kubelet[2766]: I0123 23:53:58.002504 2766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:53:58.006026 kubelet[2766]: E0123 23:53:58.005135 2766 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:53:58.006412 kubelet[2766]: I0123 23:53:58.006391 2766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:53:58.006502 kubelet[2766]: I0123 23:53:58.006492 2766 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:53:58.006562 kubelet[2766]: I0123 23:53:58.006554 2766 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:53:58.006607 kubelet[2766]: I0123 23:53:58.006600 2766 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:53:58.006780 kubelet[2766]: E0123 23:53:58.006725 2766 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:53:58.010653 kubelet[2766]: I0123 23:53:58.010243 2766 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:53:58.074051 kubelet[2766]: I0123 23:53:58.074031 2766 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:53:58.074205 kubelet[2766]: I0123 23:53:58.074192 2766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:53:58.074274 kubelet[2766]: I0123 23:53:58.074267 2766 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:58.074480 kubelet[2766]: I0123 23:53:58.074466 2766 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:53:58.074782 kubelet[2766]: I0123 23:53:58.074532 2766 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:53:58.074782 kubelet[2766]: I0123 23:53:58.074555 2766 policy_none.go:49] "None policy: Start" Jan 23 23:53:58.074782 kubelet[2766]: I0123 23:53:58.074564 2766 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:53:58.074782 kubelet[2766]: I0123 23:53:58.074574 2766 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:53:58.074782 kubelet[2766]: I0123 23:53:58.074696 2766 state_mem.go:75] "Updated machine memory state" Jan 23 23:53:58.076080 kubelet[2766]: I0123 23:53:58.076061 2766 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:53:58.076312 kubelet[2766]: I0123 23:53:58.076295 2766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:53:58.076405 kubelet[2766]: I0123 23:53:58.076376 2766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:53:58.077556 kubelet[2766]: I0123 23:53:58.077540 2766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:53:58.081782 kubelet[2766]: E0123 23:53:58.080158 2766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:53:58.107353 kubelet[2766]: I0123 23:53:58.107307 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.108340 kubelet[2766]: I0123 23:53:58.107701 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.108562 kubelet[2766]: I0123 23:53:58.107901 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.118088 kubelet[2766]: E0123 23:53:58.118033 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.191533 kubelet[2766]: I0123 23:53:58.190723 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.201782 kubelet[2766]: I0123 23:53:58.200277 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.201782 kubelet[2766]: I0123 23:53:58.200355 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.201782 kubelet[2766]: I0123 23:53:58.200375 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.201782 kubelet[2766]: I0123 23:53:58.200392 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.201782 kubelet[2766]: I0123 23:53:58.200411 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.202108 kubelet[2766]: I0123 23:53:58.200425 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.202108 kubelet[2766]: I0123 23:53:58.200440 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ded843612ec5d4f5a54fe9ca2678fa9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-417febb2dd\" (UID: \"8ded843612ec5d4f5a54fe9ca2678fa9\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.202108 kubelet[2766]: I0123 23:53:58.200454 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11fc5bad864fcef78395fff264cfaa87-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-417febb2dd\" (UID: \"11fc5bad864fcef78395fff264cfaa87\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.202108 kubelet[2766]: I0123 23:53:58.200469 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6aa46b04a2b2c2073da05c9882626e05-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" (UID: \"6aa46b04a2b2c2073da05c9882626e05\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.206371 kubelet[2766]: I0123 23:53:58.206202 2766 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.206371 kubelet[2766]: I0123 23:53:58.206273 2766 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-417febb2dd" Jan 23 23:53:58.372238 sudo[2798]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 23:53:58.372518 sudo[2798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 23:53:58.863530 sudo[2798]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:58.952771 kubelet[2766]: I0123 23:53:58.950952 2766 apiserver.go:52] "Watching apiserver" Jan 23 23:53:58.999487 kubelet[2766]: I0123 23:53:58.999409 2766 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:53:59.047079 kubelet[2766]: I0123 23:53:59.047013 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:59.047697 kubelet[2766]: I0123 23:53:59.047655 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:59.059430 kubelet[2766]: E0123 23:53:59.059202 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-417febb2dd\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:59.059939 kubelet[2766]: E0123 23:53:59.059601 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-417febb2dd\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" Jan 23 23:53:59.079477 kubelet[2766]: I0123 23:53:59.078902 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-417febb2dd" podStartSLOduration=1.07888747 podStartE2EDuration="1.07888747s" podCreationTimestamp="2026-01-23 23:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:59.078726186 +0000 UTC 
m=+1.206264155" watchObservedRunningTime="2026-01-23 23:53:59.07888747 +0000 UTC m=+1.206425439" Jan 23 23:53:59.105328 kubelet[2766]: I0123 23:53:59.105269 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-417febb2dd" podStartSLOduration=1.105248998 podStartE2EDuration="1.105248998s" podCreationTimestamp="2026-01-23 23:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:59.089952159 +0000 UTC m=+1.217490128" watchObservedRunningTime="2026-01-23 23:53:59.105248998 +0000 UTC m=+1.232786967" Jan 23 23:53:59.106163 kubelet[2766]: I0123 23:53:59.105976 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-417febb2dd" podStartSLOduration=3.105963177 podStartE2EDuration="3.105963177s" podCreationTimestamp="2026-01-23 23:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:59.105921496 +0000 UTC m=+1.233459465" watchObservedRunningTime="2026-01-23 23:53:59.105963177 +0000 UTC m=+1.233501146" Jan 23 23:54:00.541055 sudo[1898]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:00.634571 sshd[1874]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:00.640699 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:54:00.641516 systemd[1]: sshd@6-49.13.80.198:22-20.161.92.111:34768.service: Deactivated successfully. Jan 23 23:54:00.645821 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:54:00.648771 systemd-logind[1575]: Removed session 7. Jan 23 23:54:02.185683 kubelet[2766]: I0123 23:54:02.185594 2766 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:54:02.187392 kubelet[2766]: I0123 23:54:02.186708 2766 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:54:02.187465 containerd[1608]: time="2026-01-23T23:54:02.186474503Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 23:54:02.886821 kubelet[2766]: I0123 23:54:02.885485 2766 status_manager.go:890] "Failed to get status for pod" podUID="f41481f2-1090-4f05-b730-d42c14c8e883" pod="kube-system/kube-proxy-gwggw" err="pods \"kube-proxy-gwggw\" is forbidden: User \"system:node:ci-4081-3-6-n-417febb2dd\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-417febb2dd' and this object" Jan 23 23:54:02.886821 kubelet[2766]: W0123 23:54:02.885664 2766 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-6-n-417febb2dd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-417febb2dd' and this object Jan 23 23:54:02.886821 kubelet[2766]: E0123 23:54:02.885692 2766 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-6-n-417febb2dd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-417febb2dd' and this object" logger="UnhandledError" Jan 23 23:54:02.886821 kubelet[2766]: W0123 23:54:02.885730 2766 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-n-417febb2dd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-417febb2dd' and this object Jan 23 23:54:02.886821 kubelet[2766]: E0123 23:54:02.885740 2766 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-417febb2dd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-417febb2dd' and this object" logger="UnhandledError" Jan 23 23:54:02.928978 kubelet[2766]: I0123 23:54:02.928943 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f41481f2-1090-4f05-b730-d42c14c8e883-xtables-lock\") pod \"kube-proxy-gwggw\" (UID: \"f41481f2-1090-4f05-b730-d42c14c8e883\") " pod="kube-system/kube-proxy-gwggw" Jan 23 23:54:02.929996 kubelet[2766]: I0123 23:54:02.929850 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-lib-modules\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.929996 kubelet[2766]: I0123 23:54:02.929887 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-bpf-maps\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.929996 kubelet[2766]: I0123 23:54:02.929927 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cni-path\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.929996 kubelet[2766]: I0123 23:54:02.929942 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-config-path\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.930321 kubelet[2766]: I0123 23:54:02.930082 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-net\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.930321 kubelet[2766]: I0123 23:54:02.930101 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jk8\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-kube-api-access-k4jk8\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.930867 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-etc-cni-netd\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.930908 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f41481f2-1090-4f05-b730-d42c14c8e883-kube-proxy\") pod \"kube-proxy-gwggw\" (UID: \"f41481f2-1090-4f05-b730-d42c14c8e883\") " pod="kube-system/kube-proxy-gwggw" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.930945 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-clustermesh-secrets\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.930960 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f41481f2-1090-4f05-b730-d42c14c8e883-lib-modules\") pod \"kube-proxy-gwggw\" (UID: \"f41481f2-1090-4f05-b730-d42c14c8e883\") " pod="kube-system/kube-proxy-gwggw" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.931005 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-cgroup\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932369 kubelet[2766]: I0123 23:54:02.931026 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-xtables-lock\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " 
pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932600 kubelet[2766]: I0123 23:54:02.931040 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hubble-tls\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.932954 kubelet[2766]: I0123 23:54:02.932927 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-run\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.933447 kubelet[2766]: I0123 23:54:02.933225 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hostproc\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.933447 kubelet[2766]: I0123 23:54:02.933313 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-kernel\") pod \"cilium-5fspj\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " pod="kube-system/cilium-5fspj" Jan 23 23:54:02.933447 kubelet[2766]: I0123 23:54:02.933345 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8tr\" (UniqueName: \"kubernetes.io/projected/f41481f2-1090-4f05-b730-d42c14c8e883-kube-api-access-wl8tr\") pod \"kube-proxy-gwggw\" (UID: \"f41481f2-1090-4f05-b730-d42c14c8e883\") " pod="kube-system/kube-proxy-gwggw" Jan 23 23:54:03.337164 kubelet[2766]: I0123 23:54:03.337018 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvsb\" (UniqueName: \"kubernetes.io/projected/151730a3-0f96-49cd-8b7f-7cee5bb434af-kube-api-access-gmvsb\") pod \"cilium-operator-6c4d7847fc-8j97q\" (UID: \"151730a3-0f96-49cd-8b7f-7cee5bb434af\") " pod="kube-system/cilium-operator-6c4d7847fc-8j97q" Jan 23 23:54:03.337164 kubelet[2766]: I0123 23:54:03.337096 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151730a3-0f96-49cd-8b7f-7cee5bb434af-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8j97q\" (UID: \"151730a3-0f96-49cd-8b7f-7cee5bb434af\") " pod="kube-system/cilium-operator-6c4d7847fc-8j97q" Jan 23 23:54:03.894796 containerd[1608]: time="2026-01-23T23:54:03.891009258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8j97q,Uid:151730a3-0f96-49cd-8b7f-7cee5bb434af,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:03.921640 containerd[1608]: time="2026-01-23T23:54:03.921517544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:03.922168 containerd[1608]: time="2026-01-23T23:54:03.922117318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:03.922262 containerd[1608]: time="2026-01-23T23:54:03.922180999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:03.922388 containerd[1608]: time="2026-01-23T23:54:03.922352804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:03.974592 containerd[1608]: time="2026-01-23T23:54:03.974552565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8j97q,Uid:151730a3-0f96-49cd-8b7f-7cee5bb434af,Namespace:kube-system,Attempt:0,} returns sandbox id \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\"" Jan 23 23:54:03.979773 containerd[1608]: time="2026-01-23T23:54:03.979707727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:54:04.078606 containerd[1608]: time="2026-01-23T23:54:04.078550719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwggw,Uid:f41481f2-1090-4f05-b730-d42c14c8e883,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:04.089232 containerd[1608]: time="2026-01-23T23:54:04.089189406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fspj,Uid:8e11169a-1ed7-49e7-86e5-f3709eda1ae8,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:04.115254 containerd[1608]: time="2026-01-23T23:54:04.110836550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:04.115254 containerd[1608]: time="2026-01-23T23:54:04.110912072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:04.115254 containerd[1608]: time="2026-01-23T23:54:04.110935793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:04.115254 containerd[1608]: time="2026-01-23T23:54:04.111032915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:04.128340 containerd[1608]: time="2026-01-23T23:54:04.128099432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:04.128340 containerd[1608]: time="2026-01-23T23:54:04.128178634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:04.128340 containerd[1608]: time="2026-01-23T23:54:04.128225435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:04.129131 containerd[1608]: time="2026-01-23T23:54:04.128986813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:04.175834 containerd[1608]: time="2026-01-23T23:54:04.175531057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwggw,Uid:f41481f2-1090-4f05-b730-d42c14c8e883,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf9f9020c175b9cbe6332fe47baf5bf87f5ad99817453e020b126fb0b42904c6\"" Jan 23 23:54:04.181006 containerd[1608]: time="2026-01-23T23:54:04.180869181Z" level=info msg="CreateContainer within sandbox \"cf9f9020c175b9cbe6332fe47baf5bf87f5ad99817453e020b126fb0b42904c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:54:04.181600 containerd[1608]: time="2026-01-23T23:54:04.181430874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fspj,Uid:8e11169a-1ed7-49e7-86e5-f3709eda1ae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\"" Jan 23 23:54:04.204338 containerd[1608]: time="2026-01-23T23:54:04.204210884Z" level=info msg="CreateContainer within sandbox \"cf9f9020c175b9cbe6332fe47baf5bf87f5ad99817453e020b126fb0b42904c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6cc06b65b079e1f5a35ca1410592ccf41644a007223c73a85cd8f9eb54a0a286\"" Jan 23 23:54:04.205237 containerd[1608]: time="2026-01-23T23:54:04.205113985Z" level=info msg="StartContainer for \"6cc06b65b079e1f5a35ca1410592ccf41644a007223c73a85cd8f9eb54a0a286\"" Jan 23 23:54:04.262331 containerd[1608]: time="2026-01-23T23:54:04.262279476Z" level=info msg="StartContainer for \"6cc06b65b079e1f5a35ca1410592ccf41644a007223c73a85cd8f9eb54a0a286\" returns successfully" Jan 23 23:54:05.678906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971444199.mount: Deactivated successfully. 
Jan 23 23:54:06.054700 containerd[1608]: time="2026-01-23T23:54:06.054638852Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:06.056245 containerd[1608]: time="2026-01-23T23:54:06.055919880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:54:06.058782 containerd[1608]: time="2026-01-23T23:54:06.057307672Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:06.059717 containerd[1608]: time="2026-01-23T23:54:06.059677245Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.079866395s" Jan 23 23:54:06.059884 containerd[1608]: time="2026-01-23T23:54:06.059855609Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:54:06.062263 containerd[1608]: time="2026-01-23T23:54:06.062235342Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:54:06.064459 containerd[1608]: time="2026-01-23T23:54:06.064434671Z" level=info msg="CreateContainer within sandbox \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:54:06.087935 containerd[1608]: time="2026-01-23T23:54:06.087750793Z" level=info msg="CreateContainer within sandbox \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\"" Jan 23 23:54:06.089809 containerd[1608]: time="2026-01-23T23:54:06.089768598Z" level=info msg="StartContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\"" Jan 23 23:54:06.116481 systemd[1]: run-containerd-runc-k8s.io-f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0-runc.q8iZah.mount: Deactivated successfully. 
Jan 23 23:54:06.154720 containerd[1608]: time="2026-01-23T23:54:06.154666451Z" level=info msg="StartContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" returns successfully" Jan 23 23:54:07.093500 kubelet[2766]: I0123 23:54:07.091681 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gwggw" podStartSLOduration=5.091661186 podStartE2EDuration="5.091661186s" podCreationTimestamp="2026-01-23 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:54:05.081542792 +0000 UTC m=+7.209080761" watchObservedRunningTime="2026-01-23 23:54:07.091661186 +0000 UTC m=+9.219199155" Jan 23 23:54:07.093500 kubelet[2766]: I0123 23:54:07.091980 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8j97q" podStartSLOduration=2.007221483 podStartE2EDuration="4.091971833s" podCreationTimestamp="2026-01-23 23:54:03 +0000 UTC" firstStartedPulling="2026-01-23 23:54:03.975960838 +0000 UTC m=+6.103498807" lastFinishedPulling="2026-01-23 23:54:06.060711148 +0000 UTC m=+8.188249157" observedRunningTime="2026-01-23 23:54:07.091438501 +0000 UTC m=+9.218976470" watchObservedRunningTime="2026-01-23 23:54:07.091971833 +0000 UTC m=+9.219509882" Jan 23 23:54:09.740032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276273846.mount: Deactivated successfully. Jan 23 23:54:11.003234 containerd[1608]: time="2026-01-23T23:54:11.003191022Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:11.005100 containerd[1608]: time="2026-01-23T23:54:11.004585011Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:54:11.005100 containerd[1608]: time="2026-01-23T23:54:11.004601131Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:11.006608 containerd[1608]: time="2026-01-23T23:54:11.006568692Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.94392278s" Jan 23 23:54:11.006608 containerd[1608]: time="2026-01-23T23:54:11.006605572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:54:11.010087 containerd[1608]: time="2026-01-23T23:54:11.010060763Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:54:11.022035 containerd[1608]: time="2026-01-23T23:54:11.021997249Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\"" Jan 23 23:54:11.022993 containerd[1608]: time="2026-01-23T23:54:11.022513340Z" level=info msg="StartContainer for \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\"" Jan 23 23:54:11.075803 containerd[1608]: time="2026-01-23T23:54:11.075735355Z" level=info msg="StartContainer for \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\" returns successfully" Jan 23 23:54:11.121620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4-rootfs.mount: Deactivated successfully. Jan 23 23:54:11.309138 containerd[1608]: time="2026-01-23T23:54:11.309069118Z" level=info msg="shim disconnected" id=e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4 namespace=k8s.io Jan 23 23:54:11.309747 containerd[1608]: time="2026-01-23T23:54:11.309504887Z" level=warning msg="cleaning up after shim disconnected" id=e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4 namespace=k8s.io Jan 23 23:54:11.309747 containerd[1608]: time="2026-01-23T23:54:11.309530288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:54:12.107826 containerd[1608]: time="2026-01-23T23:54:12.107090076Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:54:12.129327 containerd[1608]: time="2026-01-23T23:54:12.129034441Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\"" Jan 23 23:54:12.131961 containerd[1608]: time="2026-01-23T23:54:12.131746536Z" level=info msg="StartContainer for \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\"" Jan 23 23:54:12.191496 containerd[1608]: time="2026-01-23T23:54:12.191439867Z" level=info msg="StartContainer for \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\" returns successfully" Jan 23 23:54:12.201569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:54:12.202568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:54:12.202637 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:54:12.211334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:54:12.234111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961-rootfs.mount: Deactivated successfully. Jan 23 23:54:12.242453 containerd[1608]: time="2026-01-23T23:54:12.242348140Z" level=info msg="shim disconnected" id=6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961 namespace=k8s.io Jan 23 23:54:12.242703 containerd[1608]: time="2026-01-23T23:54:12.242685347Z" level=warning msg="cleaning up after shim disconnected" id=6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961 namespace=k8s.io Jan 23 23:54:12.242789 containerd[1608]: time="2026-01-23T23:54:12.242775389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:54:12.247523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 23:54:13.115962 containerd[1608]: time="2026-01-23T23:54:13.115781470Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:54:13.142727 containerd[1608]: time="2026-01-23T23:54:13.142657888Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\"" Jan 23 23:54:13.145051 containerd[1608]: time="2026-01-23T23:54:13.145003695Z" level=info msg="StartContainer for \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\"" Jan 23 23:54:13.215952 containerd[1608]: time="2026-01-23T23:54:13.215909594Z" level=info msg="StartContainer for \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\" returns successfully" Jan 23 23:54:13.240273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056-rootfs.mount: Deactivated successfully. Jan 23 23:54:13.243861 containerd[1608]: time="2026-01-23T23:54:13.243703270Z" level=info msg="shim disconnected" id=aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056 namespace=k8s.io Jan 23 23:54:13.243861 containerd[1608]: time="2026-01-23T23:54:13.243859473Z" level=warning msg="cleaning up after shim disconnected" id=aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056 namespace=k8s.io Jan 23 23:54:13.244108 containerd[1608]: time="2026-01-23T23:54:13.243894274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:54:14.119936 containerd[1608]: time="2026-01-23T23:54:14.119631209Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:54:14.137738 containerd[1608]: time="2026-01-23T23:54:14.137679966Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\"" Jan 23 23:54:14.140566 containerd[1608]: time="2026-01-23T23:54:14.139480762Z" level=info msg="StartContainer for \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\"" Jan 23 23:54:14.197724 containerd[1608]: time="2026-01-23T23:54:14.197679791Z" level=info msg="StartContainer for \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\" returns successfully" Jan 23 23:54:14.219384 containerd[1608]: time="2026-01-23T23:54:14.219298338Z" level=info msg="shim disconnected" id=049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169 namespace=k8s.io Jan 23 23:54:14.219717 containerd[1608]: time="2026-01-23T23:54:14.219686386Z" level=warning msg="cleaning up after shim disconnected" id=049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169 namespace=k8s.io Jan 23 23:54:14.219945 containerd[1608]: time="2026-01-23T23:54:14.219917590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:54:15.127488 containerd[1608]: time="2026-01-23T23:54:15.127266685Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 
23:54:15.133528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169-rootfs.mount: Deactivated successfully. Jan 23 23:54:15.147781 containerd[1608]: time="2026-01-23T23:54:15.147230114Z" level=info msg="CreateContainer within sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\"" Jan 23 23:54:15.150943 containerd[1608]: time="2026-01-23T23:54:15.149945567Z" level=info msg="StartContainer for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\"" Jan 23 23:54:15.214092 containerd[1608]: time="2026-01-23T23:54:15.214030017Z" level=info msg="StartContainer for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" returns successfully" Jan 23 23:54:15.383487 kubelet[2766]: I0123 23:54:15.383084 2766 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:54:15.526262 kubelet[2766]: I0123 23:54:15.525386 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88-config-volume\") pod \"coredns-668d6bf9bc-mnkd7\" (UID: \"51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88\") " pod="kube-system/coredns-668d6bf9bc-mnkd7" Jan 23 23:54:15.526262 kubelet[2766]: I0123 23:54:15.525432 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbca8ced-c189-4abf-b840-32f87fd9e6b8-config-volume\") pod \"coredns-668d6bf9bc-pb8fj\" (UID: \"cbca8ced-c189-4abf-b840-32f87fd9e6b8\") " pod="kube-system/coredns-668d6bf9bc-pb8fj" Jan 23 23:54:15.526262 kubelet[2766]: I0123 23:54:15.525448 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvt2\" (UniqueName: \"kubernetes.io/projected/cbca8ced-c189-4abf-b840-32f87fd9e6b8-kube-api-access-2dvt2\") pod \"coredns-668d6bf9bc-pb8fj\" (UID: \"cbca8ced-c189-4abf-b840-32f87fd9e6b8\") " pod="kube-system/coredns-668d6bf9bc-pb8fj" Jan 23 23:54:15.526262 kubelet[2766]: I0123 23:54:15.525485 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsk8j\" (UniqueName: \"kubernetes.io/projected/51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88-kube-api-access-jsk8j\") pod \"coredns-668d6bf9bc-mnkd7\" (UID: \"51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88\") " pod="kube-system/coredns-668d6bf9bc-mnkd7" Jan 23 23:54:15.730283 containerd[1608]: time="2026-01-23T23:54:15.729341792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pb8fj,Uid:cbca8ced-c189-4abf-b840-32f87fd9e6b8,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:15.734166 containerd[1608]: time="2026-01-23T23:54:15.734116125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnkd7,Uid:51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:16.152098 kubelet[2766]: I0123 23:54:16.151997 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5fspj" podStartSLOduration=7.328676832 podStartE2EDuration="14.151965883s" podCreationTimestamp="2026-01-23 23:54:02 +0000 UTC" firstStartedPulling="2026-01-23 23:54:04.184421544 +0000 UTC m=+6.311959513" lastFinishedPulling="2026-01-23 
23:54:11.007710635 +0000 UTC m=+13.135248564" observedRunningTime="2026-01-23 23:54:16.15025697 +0000 UTC m=+18.277794939" watchObservedRunningTime="2026-01-23 23:54:16.151965883 +0000 UTC m=+18.279503932" Jan 23 23:54:17.345750 systemd-networkd[1249]: cilium_host: Link UP Jan 23 23:54:17.347150 systemd-networkd[1249]: cilium_net: Link UP Jan 23 23:54:17.347154 systemd-networkd[1249]: cilium_net: Gained carrier Jan 23 23:54:17.347831 systemd-networkd[1249]: cilium_host: Gained carrier Jan 23 23:54:17.348904 systemd-networkd[1249]: cilium_host: Gained IPv6LL Jan 23 23:54:17.456967 systemd-networkd[1249]: cilium_vxlan: Link UP Jan 23 23:54:17.456972 systemd-networkd[1249]: cilium_vxlan: Gained carrier Jan 23 23:54:17.740796 kernel: NET: Registered PF_ALG protocol family Jan 23 23:54:18.184111 systemd-networkd[1249]: cilium_net: Gained IPv6LL Jan 23 23:54:18.469235 systemd-networkd[1249]: lxc_health: Link UP Jan 23 23:54:18.475354 systemd-networkd[1249]: lxc_health: Gained carrier Jan 23 23:54:18.800895 systemd-networkd[1249]: lxcc36cee617c90: Link UP Jan 23 23:54:18.809532 systemd-networkd[1249]: lxcafa7e033ab32: Link UP Jan 23 23:54:18.818281 kernel: eth0: renamed from tmpc05d8 Jan 23 23:54:18.823877 kernel: eth0: renamed from tmp8ed0a Jan 23 23:54:18.833095 systemd-networkd[1249]: lxcc36cee617c90: Gained carrier Jan 23 23:54:18.835990 systemd-networkd[1249]: lxcafa7e033ab32: Gained carrier Jan 23 23:54:19.080962 systemd-networkd[1249]: cilium_vxlan: Gained IPv6LL Jan 23 23:54:20.040020 systemd-networkd[1249]: lxc_health: Gained IPv6LL Jan 23 23:54:20.360480 systemd-networkd[1249]: lxcc36cee617c90: Gained IPv6LL Jan 23 23:54:20.744218 systemd-networkd[1249]: lxcafa7e033ab32: Gained IPv6LL Jan 23 23:54:22.758886 containerd[1608]: time="2026-01-23T23:54:22.756324373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:22.758886 containerd[1608]: time="2026-01-23T23:54:22.756475616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:22.758886 containerd[1608]: time="2026-01-23T23:54:22.756494297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:22.758886 containerd[1608]: time="2026-01-23T23:54:22.757004666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:22.768564 containerd[1608]: time="2026-01-23T23:54:22.767600619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:22.768564 containerd[1608]: time="2026-01-23T23:54:22.767787582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:22.768564 containerd[1608]: time="2026-01-23T23:54:22.767918984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:22.768809 containerd[1608]: time="2026-01-23T23:54:22.768594437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:22.856355 containerd[1608]: time="2026-01-23T23:54:22.855649620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mnkd7,Uid:51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ed0afe9de651c882263da1f5b0c1f0d4190cd585cf81590eaa9efedafa87a4f\"" Jan 23 23:54:22.867534 containerd[1608]: time="2026-01-23T23:54:22.866114010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pb8fj,Uid:cbca8ced-c189-4abf-b840-32f87fd9e6b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c05d88c90ffb0fdb35a631c05ad78a7926f1589d4861a11c2f6fa8563dbbe8fb\"" Jan 23 23:54:22.867534 containerd[1608]: time="2026-01-23T23:54:22.866392415Z" level=info msg="CreateContainer within sandbox \"8ed0afe9de651c882263da1f5b0c1f0d4190cd585cf81590eaa9efedafa87a4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:54:22.882778 containerd[1608]: time="2026-01-23T23:54:22.882200703Z" level=info msg="CreateContainer within sandbox \"c05d88c90ffb0fdb35a631c05ad78a7926f1589d4861a11c2f6fa8563dbbe8fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:54:22.904115 containerd[1608]: time="2026-01-23T23:54:22.903936618Z" level=info msg="CreateContainer within sandbox \"8ed0afe9de651c882263da1f5b0c1f0d4190cd585cf81590eaa9efedafa87a4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2548f190a875fa50f6b66903467dfa6f85cf3fa411652271271f5970a51f551e\"" Jan 23 23:54:22.904482 containerd[1608]: time="2026-01-23T23:54:22.904426507Z" level=info msg="StartContainer for \"2548f190a875fa50f6b66903467dfa6f85cf3fa411652271271f5970a51f551e\"" Jan 23 23:54:22.905135 containerd[1608]: time="2026-01-23T23:54:22.904926076Z" level=info msg="CreateContainer within sandbox \"c05d88c90ffb0fdb35a631c05ad78a7926f1589d4861a11c2f6fa8563dbbe8fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f06763878d8bc73f9df9679a32f3766cefa9814ec3f2753c2e7c94539ef0eb6\"" Jan 23 23:54:22.908159 containerd[1608]: time="2026-01-23T23:54:22.908094373Z" level=info msg="StartContainer for \"3f06763878d8bc73f9df9679a32f3766cefa9814ec3f2753c2e7c94539ef0eb6\"" Jan 23 23:54:22.969573 containerd[1608]: time="2026-01-23T23:54:22.969363848Z" level=info msg="StartContainer for \"2548f190a875fa50f6b66903467dfa6f85cf3fa411652271271f5970a51f551e\" returns successfully" Jan 23 23:54:22.978013 containerd[1608]: time="2026-01-23T23:54:22.977630038Z" level=info msg="StartContainer for \"3f06763878d8bc73f9df9679a32f3766cefa9814ec3f2753c2e7c94539ef0eb6\" returns successfully" Jan 23 23:54:23.199801 kubelet[2766]: I0123 23:54:23.198405 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mnkd7" podStartSLOduration=20.198386224 podStartE2EDuration="20.198386224s" podCreationTimestamp="2026-01-23 23:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:54:23.175508971 +0000 UTC m=+25.303046900" watchObservedRunningTime="2026-01-23 23:54:23.198386224 +0000 UTC m=+25.325924193" Jan 23 23:54:23.233034 kubelet[2766]: I0123 23:54:23.230713 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pb8fj" podStartSLOduration=20.230685846 podStartE2EDuration="20.230685846s" podCreationTimestamp="2026-01-23 23:54:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:54:23.203091149 +0000 UTC m=+25.330629158" watchObservedRunningTime="2026-01-23 23:54:23.230685846 +0000 UTC m=+25.358223855" Jan 23 23:56:17.127091 systemd[1]: Started sshd@7-49.13.80.198:22-20.161.92.111:54242.service - OpenSSH per-connection server daemon (20.161.92.111:54242). Jan 23 23:56:17.722426 sshd[4147]: Accepted publickey for core from 20.161.92.111 port 54242 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:17.724576 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:17.729804 systemd-logind[1575]: New session 8 of user core. Jan 23 23:56:17.737302 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:56:18.233176 sshd[4147]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:18.238349 systemd[1]: sshd@7-49.13.80.198:22-20.161.92.111:54242.service: Deactivated successfully. Jan 23 23:56:18.242681 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:56:18.244962 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:56:18.246446 systemd-logind[1575]: Removed session 8. Jan 23 23:56:23.342216 systemd[1]: Started sshd@8-49.13.80.198:22-20.161.92.111:51314.service - OpenSSH per-connection server daemon (20.161.92.111:51314). Jan 23 23:56:23.945531 sshd[4163]: Accepted publickey for core from 20.161.92.111 port 51314 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:23.947695 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:23.952820 systemd-logind[1575]: New session 9 of user core. Jan 23 23:56:23.957344 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:56:24.456403 sshd[4163]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:24.466185 systemd[1]: sshd@8-49.13.80.198:22-20.161.92.111:51314.service: Deactivated successfully. Jan 23 23:56:24.469726 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:56:24.471163 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:56:24.472407 systemd-logind[1575]: Removed session 9. Jan 23 23:56:29.567237 systemd[1]: Started sshd@9-49.13.80.198:22-20.161.92.111:51326.service - OpenSSH per-connection server daemon (20.161.92.111:51326). Jan 23 23:56:30.189488 sshd[4177]: Accepted publickey for core from 20.161.92.111 port 51326 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:30.191885 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:30.197359 systemd-logind[1575]: New session 10 of user core. Jan 23 23:56:30.204215 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:56:30.718183 sshd[4177]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:30.723416 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:56:30.724555 systemd[1]: sshd@9-49.13.80.198:22-20.161.92.111:51326.service: Deactivated successfully. Jan 23 23:56:30.731152 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:56:30.732970 systemd-logind[1575]: Removed session 10. Jan 23 23:56:30.820177 systemd[1]: Started sshd@10-49.13.80.198:22-20.161.92.111:51338.service - OpenSSH per-connection server daemon (20.161.92.111:51338). 
Jan 23 23:56:31.405261 sshd[4192]: Accepted publickey for core from 20.161.92.111 port 51338 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:31.407680 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:31.414235 systemd-logind[1575]: New session 11 of user core. Jan 23 23:56:31.419158 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:31.934560 sshd[4192]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:31.942695 systemd[1]: sshd@10-49.13.80.198:22-20.161.92.111:51338.service: Deactivated successfully. Jan 23 23:56:31.946423 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:56:31.947451 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:56:31.948521 systemd-logind[1575]: Removed session 11. Jan 23 23:56:32.040117 systemd[1]: Started sshd@11-49.13.80.198:22-20.161.92.111:51346.service - OpenSSH per-connection server daemon (20.161.92.111:51346). Jan 23 23:56:32.625250 sshd[4204]: Accepted publickey for core from 20.161.92.111 port 51346 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:32.627549 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:32.633335 systemd-logind[1575]: New session 12 of user core. Jan 23 23:56:32.642305 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:56:33.128110 sshd[4204]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:33.134147 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:56:33.134907 systemd[1]: sshd@11-49.13.80.198:22-20.161.92.111:51346.service: Deactivated successfully. Jan 23 23:56:33.138408 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:56:33.141688 systemd-logind[1575]: Removed session 12. Jan 23 23:56:38.231967 systemd[1]: Started sshd@12-49.13.80.198:22-20.161.92.111:42666.service - OpenSSH per-connection server daemon (20.161.92.111:42666). Jan 23 23:56:38.828829 sshd[4219]: Accepted publickey for core from 20.161.92.111 port 42666 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:38.830850 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:38.836851 systemd-logind[1575]: New session 13 of user core. Jan 23 23:56:38.843277 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:56:39.330013 sshd[4219]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:39.336307 systemd[1]: sshd@12-49.13.80.198:22-20.161.92.111:42666.service: Deactivated successfully. Jan 23 23:56:39.339296 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:56:39.340249 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:56:39.341175 systemd-logind[1575]: Removed session 13. Jan 23 23:56:44.434062 systemd[1]: Started sshd@13-49.13.80.198:22-20.161.92.111:51378.service - OpenSSH per-connection server daemon (20.161.92.111:51378). Jan 23 23:56:45.018647 sshd[4233]: Accepted publickey for core from 20.161.92.111 port 51378 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:45.020751 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:45.026865 systemd-logind[1575]: New session 14 of user core. Jan 23 23:56:45.034308 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 23:56:45.503846 sshd[4233]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:45.509285 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:56:45.511502 systemd[1]: sshd@13-49.13.80.198:22-20.161.92.111:51378.service: Deactivated successfully. Jan 23 23:56:45.514067 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:56:45.515244 systemd-logind[1575]: Removed session 14. Jan 23 23:56:45.604332 systemd[1]: Started sshd@14-49.13.80.198:22-20.161.92.111:51392.service - OpenSSH per-connection server daemon (20.161.92.111:51392). Jan 23 23:56:46.188636 sshd[4246]: Accepted publickey for core from 20.161.92.111 port 51392 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:46.190804 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:46.196907 systemd-logind[1575]: New session 15 of user core. Jan 23 23:56:46.202282 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:56:46.753199 sshd[4246]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:46.759367 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:56:46.760232 systemd[1]: sshd@14-49.13.80.198:22-20.161.92.111:51392.service: Deactivated successfully. Jan 23 23:56:46.765080 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:56:46.766475 systemd-logind[1575]: Removed session 15. Jan 23 23:56:46.857417 systemd[1]: Started sshd@15-49.13.80.198:22-20.161.92.111:51402.service - OpenSSH per-connection server daemon (20.161.92.111:51402). Jan 23 23:56:47.456907 sshd[4258]: Accepted publickey for core from 20.161.92.111 port 51402 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:47.459381 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:47.466081 systemd-logind[1575]: New session 16 of user core. Jan 23 23:56:47.477358 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:56:48.592140 sshd[4258]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:48.595808 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:56:48.597386 systemd[1]: sshd@15-49.13.80.198:22-20.161.92.111:51402.service: Deactivated successfully. Jan 23 23:56:48.602864 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:56:48.604622 systemd-logind[1575]: Removed session 16. Jan 23 23:56:48.697076 systemd[1]: Started sshd@16-49.13.80.198:22-20.161.92.111:51410.service - OpenSSH per-connection server daemon (20.161.92.111:51410). Jan 23 23:56:49.295748 sshd[4277]: Accepted publickey for core from 20.161.92.111 port 51410 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:49.298578 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:49.306208 systemd-logind[1575]: New session 17 of user core. Jan 23 23:56:49.315171 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:56:49.906142 sshd[4277]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:49.912175 systemd[1]: sshd@16-49.13.80.198:22-20.161.92.111:51410.service: Deactivated successfully. Jan 23 23:56:49.915305 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:56:49.915597 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:56:49.917685 systemd-logind[1575]: Removed session 17. 
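Sessions 8 through 17 above all follow the same systemd pattern: a per-connection `sshd@N-<local>:22-<peer>:<port>.service` unit is started for the incoming TCP connection, pam/systemd-logind opens `session-N.scope`, and both are deactivated when the client disconnects. A small sketch, assuming plain journal text like the lines above, that pairs the logind "New session"/"Removed session" messages into per-session durations:

```python
import re
from datetime import datetime

# A few systemd-logind lines copied from the journal above.
journal = """\
Jan 23 23:56:47.466081 systemd-logind[1575]: New session 16 of user core.
Jan 23 23:56:48.604622 systemd-logind[1575]: Removed session 16.
Jan 23 23:56:49.306208 systemd-logind[1575]: New session 17 of user core.
Jan 23 23:56:49.917685 systemd-logind[1575]: Removed session 17.
"""

def ts(stamp: str) -> datetime:
    # The short journal format carries no year; that is fine for within-run deltas.
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

opened = {}
for entry in journal.splitlines():
    m = re.match(r"(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)", entry)
    if not m:
        continue
    stamp, event, sid = m.groups()
    if event == "New":
        opened[sid] = ts(stamp)
    elif sid in opened:
        print(f"session {sid}: open for {(ts(stamp) - opened.pop(sid)).total_seconds():.3f}s")
```

For the two sessions in the sample this prints roughly 1.1 s and 0.6 s, typical of the short command round trips seen throughout this stretch of the log.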
Jan 23 23:56:50.005749 systemd[1]: Started sshd@17-49.13.80.198:22-20.161.92.111:51426.service - OpenSSH per-connection server daemon (20.161.92.111:51426). Jan 23 23:56:50.591277 sshd[4289]: Accepted publickey for core from 20.161.92.111 port 51426 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:50.594057 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:50.601407 systemd-logind[1575]: New session 18 of user core. Jan 23 23:56:50.606104 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:56:51.075177 sshd[4289]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:51.079692 systemd[1]: sshd@17-49.13.80.198:22-20.161.92.111:51426.service: Deactivated successfully. Jan 23 23:56:51.085934 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:56:51.088552 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:56:51.090140 systemd-logind[1575]: Removed session 18. Jan 23 23:56:56.184163 systemd[1]: Started sshd@18-49.13.80.198:22-20.161.92.111:32772.service - OpenSSH per-connection server daemon (20.161.92.111:32772). Jan 23 23:56:56.772397 sshd[4305]: Accepted publickey for core from 20.161.92.111 port 32772 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:56:56.775508 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:56.782005 systemd-logind[1575]: New session 19 of user core. Jan 23 23:56:56.789933 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:56:57.277907 sshd[4305]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:57.284140 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:56:57.284903 systemd[1]: sshd@18-49.13.80.198:22-20.161.92.111:32772.service: Deactivated successfully. Jan 23 23:56:57.289751 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:56:57.291323 systemd-logind[1575]: Removed session 19. Jan 23 23:57:02.382326 systemd[1]: Started sshd@19-49.13.80.198:22-20.161.92.111:32788.service - OpenSSH per-connection server daemon (20.161.92.111:32788). Jan 23 23:57:02.978600 sshd[4321]: Accepted publickey for core from 20.161.92.111 port 32788 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:02.980495 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:02.985404 systemd-logind[1575]: New session 20 of user core. Jan 23 23:57:02.997803 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:57:03.469139 sshd[4321]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:03.476087 systemd[1]: sshd@19-49.13.80.198:22-20.161.92.111:32788.service: Deactivated successfully. Jan 23 23:57:03.482879 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:57:03.485692 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:57:03.486707 systemd-logind[1575]: Removed session 20. Jan 23 23:57:08.572084 systemd[1]: Started sshd@20-49.13.80.198:22-20.161.92.111:32986.service - OpenSSH per-connection server daemon (20.161.92.111:32986). 
Jan 23 23:57:09.159636 sshd[4337]: Accepted publickey for core from 20.161.92.111 port 32986 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:09.163998 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:09.170825 systemd-logind[1575]: New session 21 of user core. Jan 23 23:57:09.177274 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:57:09.646893 sshd[4337]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:09.652164 systemd[1]: sshd@20-49.13.80.198:22-20.161.92.111:32986.service: Deactivated successfully. Jan 23 23:57:09.657328 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:57:09.658227 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:57:09.659103 systemd-logind[1575]: Removed session 21. Jan 23 23:57:09.757113 systemd[1]: Started sshd@21-49.13.80.198:22-20.161.92.111:33000.service - OpenSSH per-connection server daemon (20.161.92.111:33000). Jan 23 23:57:10.374862 sshd[4350]: Accepted publickey for core from 20.161.92.111 port 33000 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:10.377440 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:10.384778 systemd-logind[1575]: New session 22 of user core. Jan 23 23:57:10.388015 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:57:12.649801 containerd[1608]: time="2026-01-23T23:57:12.648794971Z" level=info msg="StopContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" with timeout 30 (s)" Jan 23 23:57:12.654254 containerd[1608]: time="2026-01-23T23:57:12.653108707Z" level=info msg="Stop container \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" with signal terminated" Jan 23 23:57:12.707476 containerd[1608]: time="2026-01-23T23:57:12.707232738Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:57:12.718292 containerd[1608]: time="2026-01-23T23:57:12.718042920Z" level=info msg="StopContainer for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" with timeout 2 (s)" Jan 23 23:57:12.718874 containerd[1608]: time="2026-01-23T23:57:12.718787410Z" level=info msg="Stop container \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" with signal terminated" Jan 23 23:57:12.729513 systemd-networkd[1249]: lxc_health: Link DOWN Jan 23 23:57:12.729520 systemd-networkd[1249]: lxc_health: Lost carrier Jan 23 23:57:12.751424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0-rootfs.mount: Deactivated successfully. 
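At 23:57:12 the kubelet begins tearing down the two Cilium pods: containerd is asked to stop the operator container `f4b50a62…` with a 30 s grace period and the agent container `41db4dd3…` with 2 s, each "with signal terminated", i.e. SIGTERM first and SIGKILL only if the grace period expires; the `lxc_health` link dropping right afterwards is the agent going away. A short sketch, assuming the containerd message format shown above, that collects the requested grace period per container:

```python
import re

# Two containerd messages copied (escaping included) from the journal above.
log = """\
msg="StopContainer for \\"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\\" with timeout 30 (s)"
msg="StopContainer for \\"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\\" with timeout 2 (s)"
"""

# Container ID -> grace period (seconds) the kubelet asked containerd to honour.
grace = {
    cid: int(secs)
    for cid, secs in re.findall(r'StopContainer for \\"([0-9a-f]+)\\" with timeout (\d+) \(s\)', log)
}
for cid, secs in grace.items():
    print(f"{cid[:12]}: SIGTERM now, SIGKILL after {secs}s if still running")
```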
Jan 23 23:57:12.768631 containerd[1608]: time="2026-01-23T23:57:12.768564184Z" level=info msg="shim disconnected" id=f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0 namespace=k8s.io Jan 23 23:57:12.768916 containerd[1608]: time="2026-01-23T23:57:12.768850787Z" level=warning msg="cleaning up after shim disconnected" id=f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0 namespace=k8s.io Jan 23 23:57:12.768916 containerd[1608]: time="2026-01-23T23:57:12.768872708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:12.790444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a-rootfs.mount: Deactivated successfully. Jan 23 23:57:12.794812 containerd[1608]: time="2026-01-23T23:57:12.794662926Z" level=info msg="shim disconnected" id=41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a namespace=k8s.io Jan 23 23:57:12.794812 containerd[1608]: time="2026-01-23T23:57:12.794715247Z" level=warning msg="cleaning up after shim disconnected" id=41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a namespace=k8s.io Jan 23 23:57:12.794812 containerd[1608]: time="2026-01-23T23:57:12.794729807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:12.800381 containerd[1608]: time="2026-01-23T23:57:12.800240240Z" level=info msg="StopContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" returns successfully" Jan 23 23:57:12.801927 containerd[1608]: time="2026-01-23T23:57:12.801858381Z" level=info msg="StopPodSandbox for \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\"" Jan 23 23:57:12.802101 containerd[1608]: time="2026-01-23T23:57:12.802038983Z" level=info msg="Container to stop \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.805301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133-shm.mount: Deactivated successfully. 
Jan 23 23:57:12.825933 containerd[1608]: time="2026-01-23T23:57:12.825889256Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:57:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:57:12.831306 containerd[1608]: time="2026-01-23T23:57:12.831191246Z" level=info msg="StopContainer for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" returns successfully" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831825774Z" level=info msg="StopPodSandbox for \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\"" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831861575Z" level=info msg="Container to stop \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831872895Z" level=info msg="Container to stop \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831882095Z" level=info msg="Container to stop \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831891615Z" level=info msg="Container to stop \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.832786 containerd[1608]: time="2026-01-23T23:57:12.831901135Z" level=info msg="Container to stop \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:57:12.834679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07-shm.mount: Deactivated successfully. 
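Each `StopPodSandbox` above first enumerates every container the sandbox still tracks: one for the operator sandbox `1359f70f…`, five for the agent sandbox `1af47b5d…`, all already in `CONTAINER_EXITED`. The "must be in running or unknown state" wording is logged at info level and simply records that an already-exited container needs no further stop; it is not an error. A sketch, assuming the same message format, that groups those container IDs under their sandbox:

```python
import re
from collections import defaultdict

# Abridged containerd messages from the journal above (one sandbox, two of its containers).
lines = [
    'msg="StopPodSandbox for \\"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\\""',
    'msg="Container to stop \\"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\\" must be in running or unknown state, current state \\"CONTAINER_EXITED\\""',
    'msg="Container to stop \\"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\\" must be in running or unknown state, current state \\"CONTAINER_EXITED\\""',
]

members = defaultdict(list)   # sandbox ID -> container IDs being torn down with it
current = None
for entry in lines:
    m = re.search(r'StopPodSandbox for \\"([0-9a-f]+)\\"', entry)
    if m:
        current = m.group(1)
        continue
    m = re.search(r'Container to stop \\"([0-9a-f]+)\\"', entry)
    if m and current:
        members[current].append(m.group(1))

for sandbox, cids in members.items():
    print(sandbox[:12], "->", [c[:12] for c in cids])
```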
Jan 23 23:57:12.858530 containerd[1608]: time="2026-01-23T23:57:12.858459804Z" level=info msg="shim disconnected" id=1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133 namespace=k8s.io Jan 23 23:57:12.858530 containerd[1608]: time="2026-01-23T23:57:12.858509005Z" level=warning msg="cleaning up after shim disconnected" id=1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133 namespace=k8s.io Jan 23 23:57:12.858994 containerd[1608]: time="2026-01-23T23:57:12.858875530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:12.881842 containerd[1608]: time="2026-01-23T23:57:12.881802311Z" level=info msg="TearDown network for sandbox \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\" successfully" Jan 23 23:57:12.882437 containerd[1608]: time="2026-01-23T23:57:12.882294677Z" level=info msg="StopPodSandbox for \"1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133\" returns successfully" Jan 23 23:57:12.886880 containerd[1608]: time="2026-01-23T23:57:12.885123114Z" level=info msg="shim disconnected" id=1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07 namespace=k8s.io Jan 23 23:57:12.886880 containerd[1608]: time="2026-01-23T23:57:12.885896844Z" level=warning msg="cleaning up after shim disconnected" id=1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07 namespace=k8s.io Jan 23 23:57:12.886880 containerd[1608]: time="2026-01-23T23:57:12.885908565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:12.904419 containerd[1608]: time="2026-01-23T23:57:12.904312726Z" level=info msg="TearDown network for sandbox \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" successfully" Jan 23 23:57:12.904548 containerd[1608]: time="2026-01-23T23:57:12.904529889Z" level=info msg="StopPodSandbox for \"1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07\" returns successfully" Jan 23 23:57:13.079498 kubelet[2766]: I0123 23:57:13.079449 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079346 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-run\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079564 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-lib-modules\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079607 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4jk8\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-kube-api-access-k4jk8\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079643 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hostproc\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079678 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-kernel\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080038 kubelet[2766]: I0123 23:57:13.079707 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-cgroup\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079740 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hubble-tls\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079806 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-xtables-lock\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079843 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmvsb\" (UniqueName: \"kubernetes.io/projected/151730a3-0f96-49cd-8b7f-7cee5bb434af-kube-api-access-gmvsb\") pod \"151730a3-0f96-49cd-8b7f-7cee5bb434af\" (UID: \"151730a3-0f96-49cd-8b7f-7cee5bb434af\") " Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079873 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cni-path\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " 
Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079906 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-bpf-maps\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080322 kubelet[2766]: I0123 23:57:13.079943 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151730a3-0f96-49cd-8b7f-7cee5bb434af-cilium-config-path\") pod \"151730a3-0f96-49cd-8b7f-7cee5bb434af\" (UID: \"151730a3-0f96-49cd-8b7f-7cee5bb434af\") " Jan 23 23:57:13.080603 kubelet[2766]: I0123 23:57:13.079978 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-config-path\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080603 kubelet[2766]: I0123 23:57:13.080007 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-net\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080603 kubelet[2766]: I0123 23:57:13.080037 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-etc-cni-netd\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080603 kubelet[2766]: I0123 23:57:13.080092 2766 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-clustermesh-secrets\") pod \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\" (UID: \"8e11169a-1ed7-49e7-86e5-f3709eda1ae8\") " Jan 23 23:57:13.080603 kubelet[2766]: I0123 23:57:13.080161 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-run\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.080800 kubelet[2766]: I0123 23:57:13.080627 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.080800 kubelet[2766]: I0123 23:57:13.080675 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.082457 kubelet[2766]: I0123 23:57:13.082396 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.082736 kubelet[2766]: I0123 23:57:13.082610 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.082736 kubelet[2766]: I0123 23:57:13.082662 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.086584 kubelet[2766]: I0123 23:57:13.086268 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.086584 kubelet[2766]: I0123 23:57:13.086325 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.086584 kubelet[2766]: I0123 23:57:13.086417 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:57:13.086584 kubelet[2766]: I0123 23:57:13.086507 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-kube-api-access-k4jk8" (OuterVolumeSpecName: "kube-api-access-k4jk8") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "kube-api-access-k4jk8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:57:13.086584 kubelet[2766]: I0123 23:57:13.086551 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.086820 kubelet[2766]: I0123 23:57:13.086575 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:57:13.089607 kubelet[2766]: I0123 23:57:13.089571 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151730a3-0f96-49cd-8b7f-7cee5bb434af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "151730a3-0f96-49cd-8b7f-7cee5bb434af" (UID: "151730a3-0f96-49cd-8b7f-7cee5bb434af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:57:13.089937 kubelet[2766]: I0123 23:57:13.089700 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:57:13.090314 kubelet[2766]: I0123 23:57:13.090258 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151730a3-0f96-49cd-8b7f-7cee5bb434af-kube-api-access-gmvsb" (OuterVolumeSpecName: "kube-api-access-gmvsb") pod "151730a3-0f96-49cd-8b7f-7cee5bb434af" (UID: "151730a3-0f96-49cd-8b7f-7cee5bb434af"). InnerVolumeSpecName "kube-api-access-gmvsb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:57:13.090314 kubelet[2766]: I0123 23:57:13.090285 2766 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e11169a-1ed7-49e7-86e5-f3709eda1ae8" (UID: "8e11169a-1ed7-49e7-86e5-f3709eda1ae8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:57:13.146845 kubelet[2766]: E0123 23:57:13.146588 2766 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180780 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-net\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180829 2766 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-etc-cni-netd\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180847 2766 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-clustermesh-secrets\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180861 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-config-path\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180877 2766 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-lib-modules\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180890 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4jk8\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-kube-api-access-k4jk8\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.180908 kubelet[2766]: I0123 23:57:13.180904 2766 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hostproc\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180919 2766 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180933 2766 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-hubble-tls\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180949 2766 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-xtables-lock\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180962 2766 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmvsb\" (UniqueName: \"kubernetes.io/projected/151730a3-0f96-49cd-8b7f-7cee5bb434af-kube-api-access-gmvsb\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180977 2766 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cilium-cgroup\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.180990 2766 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-cni-path\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.181003 2766 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151730a3-0f96-49cd-8b7f-7cee5bb434af-cilium-config-path\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.181643 kubelet[2766]: I0123 23:57:13.181017 2766 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e11169a-1ed7-49e7-86e5-f3709eda1ae8-bpf-maps\") on node \"ci-4081-3-6-n-417febb2dd\" DevicePath \"\"" Jan 23 23:57:13.589435 kubelet[2766]: I0123 23:57:13.587863 2766 scope.go:117] "RemoveContainer" containerID="f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0" Jan 23 23:57:13.590966 containerd[1608]: time="2026-01-23T23:57:13.590935190Z" level=info msg="RemoveContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\"" Jan 23 23:57:13.597142 containerd[1608]: time="2026-01-23T23:57:13.596922909Z" level=info msg="RemoveContainer for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" returns successfully" Jan 23 23:57:13.598722 kubelet[2766]: I0123 23:57:13.598691 2766 scope.go:117] "RemoveContainer" containerID="f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0" Jan 23 23:57:13.599360 containerd[1608]: time="2026-01-23T23:57:13.599326061Z" level=error msg="ContainerStatus for \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\": not found" Jan 23 23:57:13.599719 kubelet[2766]: E0123 23:57:13.599694 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\": not found" containerID="f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0" Jan 23 23:57:13.600483 kubelet[2766]: I0123 23:57:13.599723 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0"} err="failed to get container status \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4b50a622ddf8c39650a385ac7e99d7bc5a95a628fa4ddab11ab83b284124da0\": not found" Jan 23 23:57:13.600551 kubelet[2766]: I0123 23:57:13.600489 2766 scope.go:117] "RemoveContainer" containerID="41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a" Jan 23 23:57:13.602849 containerd[1608]: time="2026-01-23T23:57:13.602826347Z" level=info msg="RemoveContainer for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\"" Jan 23 23:57:13.607002 containerd[1608]: time="2026-01-23T23:57:13.606976001Z" level=info msg="RemoveContainer for 
\"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" returns successfully" Jan 23 23:57:13.607282 kubelet[2766]: I0123 23:57:13.607265 2766 scope.go:117] "RemoveContainer" containerID="049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169" Jan 23 23:57:13.608474 containerd[1608]: time="2026-01-23T23:57:13.608448980Z" level=info msg="RemoveContainer for \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\"" Jan 23 23:57:13.612797 containerd[1608]: time="2026-01-23T23:57:13.612742357Z" level=info msg="RemoveContainer for \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\" returns successfully" Jan 23 23:57:13.613090 kubelet[2766]: I0123 23:57:13.612952 2766 scope.go:117] "RemoveContainer" containerID="aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056" Jan 23 23:57:13.616927 containerd[1608]: time="2026-01-23T23:57:13.616902012Z" level=info msg="RemoveContainer for \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\"" Jan 23 23:57:13.620388 containerd[1608]: time="2026-01-23T23:57:13.620360217Z" level=info msg="RemoveContainer for \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\" returns successfully" Jan 23 23:57:13.620640 kubelet[2766]: I0123 23:57:13.620522 2766 scope.go:117] "RemoveContainer" containerID="6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961" Jan 23 23:57:13.622458 containerd[1608]: time="2026-01-23T23:57:13.622436564Z" level=info msg="RemoveContainer for \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\"" Jan 23 23:57:13.627165 containerd[1608]: time="2026-01-23T23:57:13.627015464Z" level=info msg="RemoveContainer for \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\" returns successfully" Jan 23 23:57:13.627382 kubelet[2766]: I0123 23:57:13.627315 2766 scope.go:117] "RemoveContainer" containerID="e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4" Jan 23 23:57:13.630191 containerd[1608]: time="2026-01-23T23:57:13.629781501Z" level=info msg="RemoveContainer for \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\"" Jan 23 23:57:13.632981 containerd[1608]: time="2026-01-23T23:57:13.632954343Z" level=info msg="RemoveContainer for \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\" returns successfully" Jan 23 23:57:13.633196 kubelet[2766]: I0123 23:57:13.633173 2766 scope.go:117] "RemoveContainer" containerID="41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a" Jan 23 23:57:13.633390 containerd[1608]: time="2026-01-23T23:57:13.633359988Z" level=error msg="ContainerStatus for \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\": not found" Jan 23 23:57:13.633599 kubelet[2766]: E0123 23:57:13.633498 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\": not found" containerID="41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a" Jan 23 23:57:13.633599 kubelet[2766]: I0123 23:57:13.633525 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a"} err="failed to get container status 
\"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"41db4dd3dba958ec078ca5600612b4bb89e166c2339a94e0243b144489cecb6a\": not found" Jan 23 23:57:13.633599 kubelet[2766]: I0123 23:57:13.633547 2766 scope.go:117] "RemoveContainer" containerID="049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169" Jan 23 23:57:13.633886 containerd[1608]: time="2026-01-23T23:57:13.633813434Z" level=error msg="ContainerStatus for \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\": not found" Jan 23 23:57:13.633983 kubelet[2766]: E0123 23:57:13.633935 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\": not found" containerID="049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169" Jan 23 23:57:13.633983 kubelet[2766]: I0123 23:57:13.633957 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169"} err="failed to get container status \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\": rpc error: code = NotFound desc = an error occurred when try to find container \"049d96e29a6be27c22d9bf0492c7081e9d4d55f1673a829e6c80202249d6a169\": not found" Jan 23 23:57:13.633983 kubelet[2766]: I0123 23:57:13.633974 2766 scope.go:117] "RemoveContainer" containerID="aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056" Jan 23 23:57:13.634211 containerd[1608]: time="2026-01-23T23:57:13.634180799Z" level=error msg="ContainerStatus for \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\": not found" Jan 23 23:57:13.634365 kubelet[2766]: E0123 23:57:13.634318 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\": not found" containerID="aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056" Jan 23 23:57:13.634543 kubelet[2766]: I0123 23:57:13.634466 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056"} err="failed to get container status \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa84136b4ae0b9898d6957f2ffbf0e5a61ea64f95b61c86dd249198ed35b9056\": not found" Jan 23 23:57:13.634543 kubelet[2766]: I0123 23:57:13.634500 2766 scope.go:117] "RemoveContainer" containerID="6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961" Jan 23 23:57:13.634803 containerd[1608]: time="2026-01-23T23:57:13.634775566Z" level=error msg="ContainerStatus for \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\": not found" Jan 23 23:57:13.634936 kubelet[2766]: E0123 23:57:13.634914 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\": not found" containerID="6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961" Jan 23 23:57:13.634981 kubelet[2766]: I0123 23:57:13.634940 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961"} err="failed to get container status \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c717c02902da4577e22f972b0ecbed610e41dc7922287e5180a31b4e9150961\": not found" Jan 23 23:57:13.634981 kubelet[2766]: I0123 23:57:13.634959 2766 scope.go:117] "RemoveContainer" containerID="e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4" Jan 23 23:57:13.635205 containerd[1608]: time="2026-01-23T23:57:13.635173252Z" level=error msg="ContainerStatus for \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\": not found" Jan 23 23:57:13.635399 kubelet[2766]: E0123 23:57:13.635334 2766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\": not found" containerID="e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4" Jan 23 23:57:13.635557 kubelet[2766]: I0123 23:57:13.635486 2766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4"} err="failed to get container status \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e47ef78248e058929a8c6cd42bf6a1cbc94e18a339b2eee482aee88c52d4b7b4\": not found" Jan 23 23:57:13.665711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1af47b5d214707ab21e241ea18ba3296e134a0c74b03a8316529187a22fc7b07-rootfs.mount: Deactivated successfully. Jan 23 23:57:13.665893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1359f70fcf7c208a117845cff2811933315b2617df6952e849e3186bf704b133-rootfs.mount: Deactivated successfully. Jan 23 23:57:13.665995 systemd[1]: var-lib-kubelet-pods-8e11169a\x2d1ed7\x2d49e7\x2d86e5\x2df3709eda1ae8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4jk8.mount: Deactivated successfully. Jan 23 23:57:13.666102 systemd[1]: var-lib-kubelet-pods-151730a3\x2d0f96\x2d49cd\x2d8b7f\x2d7cee5bb434af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmvsb.mount: Deactivated successfully. Jan 23 23:57:13.666204 systemd[1]: var-lib-kubelet-pods-8e11169a\x2d1ed7\x2d49e7\x2d86e5\x2df3709eda1ae8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 23:57:13.666302 systemd[1]: var-lib-kubelet-pods-8e11169a\x2d1ed7\x2d49e7\x2d86e5\x2df3709eda1ae8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 23:57:14.011225 kubelet[2766]: I0123 23:57:14.011052 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151730a3-0f96-49cd-8b7f-7cee5bb434af" path="/var/lib/kubelet/pods/151730a3-0f96-49cd-8b7f-7cee5bb434af/volumes" Jan 23 23:57:14.012303 kubelet[2766]: I0123 23:57:14.012259 2766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e11169a-1ed7-49e7-86e5-f3709eda1ae8" path="/var/lib/kubelet/pods/8e11169a-1ed7-49e7-86e5-f3709eda1ae8/volumes" Jan 23 23:57:14.665210 sshd[4350]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:14.672378 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:57:14.673711 systemd[1]: sshd@21-49.13.80.198:22-20.161.92.111:33000.service: Deactivated successfully. Jan 23 23:57:14.677658 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:57:14.679801 systemd-logind[1575]: Removed session 22. Jan 23 23:57:14.778215 systemd[1]: Started sshd@22-49.13.80.198:22-20.161.92.111:38742.service - OpenSSH per-connection server daemon (20.161.92.111:38742). Jan 23 23:57:15.394423 sshd[4521]: Accepted publickey for core from 20.161.92.111 port 38742 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:15.397128 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:15.403395 systemd-logind[1575]: New session 23 of user core. Jan 23 23:57:15.417298 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:57:17.130149 kubelet[2766]: I0123 23:57:17.129610 2766 memory_manager.go:355] "RemoveStaleState removing state" podUID="151730a3-0f96-49cd-8b7f-7cee5bb434af" containerName="cilium-operator" Jan 23 23:57:17.130149 kubelet[2766]: I0123 23:57:17.129654 2766 memory_manager.go:355] "RemoveStaleState removing state" podUID="8e11169a-1ed7-49e7-86e5-f3709eda1ae8" containerName="cilium-agent" Jan 23 23:57:17.211882 sshd[4521]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:17.217986 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:57:17.218494 systemd[1]: sshd@22-49.13.80.198:22-20.161.92.111:38742.service: Deactivated successfully. Jan 23 23:57:17.224132 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:57:17.226935 systemd-logind[1575]: Removed session 23. 
Jan 23 23:57:17.308527 kubelet[2766]: I0123 23:57:17.308463 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-cilium-cgroup\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.308712 kubelet[2766]: I0123 23:57:17.308544 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-cilium-run\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.308712 kubelet[2766]: I0123 23:57:17.308588 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-lib-modules\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.308712 kubelet[2766]: I0123 23:57:17.308621 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-host-proc-sys-net\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.308712 kubelet[2766]: I0123 23:57:17.308658 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-bpf-maps\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.308712 kubelet[2766]: I0123 23:57:17.308692 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-cni-path\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308726 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-hostproc\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308790 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b659a0ff-adef-479d-865e-8106a53f2a80-clustermesh-secrets\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308827 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b659a0ff-adef-479d-865e-8106a53f2a80-hubble-tls\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308868 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/b659a0ff-adef-479d-865e-8106a53f2a80-cilium-ipsec-secrets\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308905 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4wnc\" (UniqueName: \"kubernetes.io/projected/b659a0ff-adef-479d-865e-8106a53f2a80-kube-api-access-h4wnc\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309129 kubelet[2766]: I0123 23:57:17.308956 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-xtables-lock\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309576 kubelet[2766]: I0123 23:57:17.308988 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-host-proc-sys-kernel\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309576 kubelet[2766]: I0123 23:57:17.309023 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b659a0ff-adef-479d-865e-8106a53f2a80-etc-cni-netd\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.309576 kubelet[2766]: I0123 23:57:17.309058 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b659a0ff-adef-479d-865e-8106a53f2a80-cilium-config-path\") pod \"cilium-76w6c\" (UID: \"b659a0ff-adef-479d-865e-8106a53f2a80\") " pod="kube-system/cilium-76w6c" Jan 23 23:57:17.322569 systemd[1]: Started sshd@23-49.13.80.198:22-20.161.92.111:38756.service - OpenSSH per-connection server daemon (20.161.92.111:38756). Jan 23 23:57:17.440866 containerd[1608]: time="2026-01-23T23:57:17.439980755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76w6c,Uid:b659a0ff-adef-479d-865e-8106a53f2a80,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:17.463608 containerd[1608]: time="2026-01-23T23:57:17.463298463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:17.463608 containerd[1608]: time="2026-01-23T23:57:17.463352984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:17.463608 containerd[1608]: time="2026-01-23T23:57:17.463437745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:17.464261 containerd[1608]: time="2026-01-23T23:57:17.464185315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:17.505781 containerd[1608]: time="2026-01-23T23:57:17.505714462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76w6c,Uid:b659a0ff-adef-479d-865e-8106a53f2a80,Namespace:kube-system,Attempt:0,} returns sandbox id \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\"" Jan 23 23:57:17.509792 containerd[1608]: time="2026-01-23T23:57:17.509713315Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:57:17.520296 containerd[1608]: time="2026-01-23T23:57:17.520251814Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f29b54e1043b2b830a363e8573645074b11c6782c51ffad2cc30ea471badc2fa\"" Jan 23 23:57:17.521150 containerd[1608]: time="2026-01-23T23:57:17.521101825Z" level=info msg="StartContainer for \"f29b54e1043b2b830a363e8573645074b11c6782c51ffad2cc30ea471badc2fa\"" Jan 23 23:57:17.582017 containerd[1608]: time="2026-01-23T23:57:17.581966668Z" level=info msg="StartContainer for \"f29b54e1043b2b830a363e8573645074b11c6782c51ffad2cc30ea471badc2fa\" returns successfully" Jan 23 23:57:17.646734 containerd[1608]: time="2026-01-23T23:57:17.646651722Z" level=info msg="shim disconnected" id=f29b54e1043b2b830a363e8573645074b11c6782c51ffad2cc30ea471badc2fa namespace=k8s.io Jan 23 23:57:17.646734 containerd[1608]: time="2026-01-23T23:57:17.646728123Z" level=warning msg="cleaning up after shim disconnected" id=f29b54e1043b2b830a363e8573645074b11c6782c51ffad2cc30ea471badc2fa namespace=k8s.io Jan 23 23:57:17.646734 containerd[1608]: time="2026-01-23T23:57:17.646741963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:17.942890 sshd[4534]: Accepted publickey for core from 20.161.92.111 port 38756 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:17.945032 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:17.953860 systemd-logind[1575]: New session 24 of user core. Jan 23 23:57:17.958198 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:57:18.147982 kubelet[2766]: E0123 23:57:18.147804 2766 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:57:18.383486 sshd[4534]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:18.391052 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:57:18.391911 systemd[1]: sshd@23-49.13.80.198:22-20.161.92.111:38756.service: Deactivated successfully. Jan 23 23:57:18.396178 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:57:18.398823 systemd-logind[1575]: Removed session 24. Jan 23 23:57:18.486205 systemd[1]: Started sshd@24-49.13.80.198:22-20.161.92.111:38760.service - OpenSSH per-connection server daemon (20.161.92.111:38760). 
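The "shim disconnected" / "cleaning up dead shim" messages right after mount-cgroup returned are not a crash: short-lived setup containers like this one run to completion before the long-running cilium-agent starts (the naming matches Cilium's usual init containers), so the task exits and containerd tears its shim down, while the kubelet keeps reporting "cni plugin not initialized" until the agent is up. A hedged client-go sketch for confirming those containers terminated cleanly is shown below; the pod and namespace names come from the log, while the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Pod name taken from the RunPodSandbox entry above.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-76w6c", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range pod.Status.InitContainerStatuses {
            if t := s.State.Terminated; t != nil {
                fmt.Printf("%s exit=%d reason=%s\n", s.Name, t.ExitCode, t.Reason)
            } else {
                fmt.Printf("%s not terminated yet\n", s.Name)
            }
        }
    }

An exit code of 0 with reason Completed for each listed container corresponds to the clean shim teardowns seen in the entries above.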
Jan 23 23:57:18.619419 containerd[1608]: time="2026-01-23T23:57:18.618933714Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:57:18.644336 containerd[1608]: time="2026-01-23T23:57:18.644218808Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691\"" Jan 23 23:57:18.648700 containerd[1608]: time="2026-01-23T23:57:18.648064779Z" level=info msg="StartContainer for \"f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691\"" Jan 23 23:57:18.710893 containerd[1608]: time="2026-01-23T23:57:18.710846008Z" level=info msg="StartContainer for \"f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691\" returns successfully" Jan 23 23:57:18.743394 containerd[1608]: time="2026-01-23T23:57:18.743308476Z" level=info msg="shim disconnected" id=f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691 namespace=k8s.io Jan 23 23:57:18.743627 containerd[1608]: time="2026-01-23T23:57:18.743395918Z" level=warning msg="cleaning up after shim disconnected" id=f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691 namespace=k8s.io Jan 23 23:57:18.743627 containerd[1608]: time="2026-01-23T23:57:18.743417438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:19.081297 sshd[4651]: Accepted publickey for core from 20.161.92.111 port 38760 ssh2: RSA SHA256:3DX+RaKjYaoUtmPV8vgaNOQtcNAuYHgAFzbTULjjOx0 Jan 23 23:57:19.084630 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:19.092388 systemd-logind[1575]: New session 25 of user core. Jan 23 23:57:19.101464 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:57:19.421529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f915ca6f502b1f874d9ebb58f7fca718f2bbbcbf8b804a425a1ad5f2fb476691-rootfs.mount: Deactivated successfully. 
Jan 23 23:57:19.628345 containerd[1608]: time="2026-01-23T23:57:19.628303528Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:57:19.648245 containerd[1608]: time="2026-01-23T23:57:19.647887947Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e\"" Jan 23 23:57:19.650649 containerd[1608]: time="2026-01-23T23:57:19.650135656Z" level=info msg="StartContainer for \"2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e\"" Jan 23 23:57:19.714744 containerd[1608]: time="2026-01-23T23:57:19.714520187Z" level=info msg="StartContainer for \"2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e\" returns successfully" Jan 23 23:57:19.745032 containerd[1608]: time="2026-01-23T23:57:19.744891988Z" level=info msg="shim disconnected" id=2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e namespace=k8s.io Jan 23 23:57:19.745032 containerd[1608]: time="2026-01-23T23:57:19.744945669Z" level=warning msg="cleaning up after shim disconnected" id=2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e namespace=k8s.io Jan 23 23:57:19.745032 containerd[1608]: time="2026-01-23T23:57:19.744953429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:20.423670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2173522995996077da8e4b6f17b4e102d0de31edb01f225143fe1a82dbe2366e-rootfs.mount: Deactivated successfully. Jan 23 23:57:20.638497 containerd[1608]: time="2026-01-23T23:57:20.638428682Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:57:20.663231 containerd[1608]: time="2026-01-23T23:57:20.663137049Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a\"" Jan 23 23:57:20.666198 containerd[1608]: time="2026-01-23T23:57:20.666161609Z" level=info msg="StartContainer for \"bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a\"" Jan 23 23:57:20.726319 containerd[1608]: time="2026-01-23T23:57:20.726204003Z" level=info msg="StartContainer for \"bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a\" returns successfully" Jan 23 23:57:20.748442 containerd[1608]: time="2026-01-23T23:57:20.748323136Z" level=info msg="shim disconnected" id=bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a namespace=k8s.io Jan 23 23:57:20.748442 containerd[1608]: time="2026-01-23T23:57:20.748384337Z" level=warning msg="cleaning up after shim disconnected" id=bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a namespace=k8s.io Jan 23 23:57:20.748442 containerd[1608]: time="2026-01-23T23:57:20.748393337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:21.423454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd1fefe8d62666d235836045c5247339e9b422c9e860bd39d02068b9d74ce09a-rootfs.mount: Deactivated successfully. 
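Each "run-containerd-io.containerd.runtime.v2.task-k8s.io-<id>-rootfs.mount: Deactivated successfully" line is systemd unmounting the rootfs of a task that has already exited; the escaped unit name decodes to a path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/. The following sketch, under the assumption that it runs on the node itself with containerd's default state directory, scans /proc/self/mountinfo for any such rootfs mounts left behind after a shim exits.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        // containerd runtime v2 task directory, decoded from the systemd
        // mount unit names in the log above.
        prefix := "/run/containerd/io.containerd.runtime.v2.task/k8s.io/"

        f, err := os.Open("/proc/self/mountinfo")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // The fifth whitespace-separated field of a mountinfo line is
            // the mount point.
            fields := strings.Fields(sc.Text())
            if len(fields) > 4 && strings.HasPrefix(fields[4], prefix) {
                fmt.Println(fields[4])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }

An empty result after the "Deactivated successfully" messages means systemd released every rootfs mount belonging to the exited tasks.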
Jan 23 23:57:21.646342 containerd[1608]: time="2026-01-23T23:57:21.646293538Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:57:21.661473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201197648.mount: Deactivated successfully. Jan 23 23:57:21.668012 containerd[1608]: time="2026-01-23T23:57:21.667962025Z" level=info msg="CreateContainer within sandbox \"561dc4c3757a6760d68be6af54ac7a95535b0d7a26069ec103ca741353bb45e2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d5842b3bb33655b3b1140a1163474632383fe49ab2001404e67b1cc98d69d7df\"" Jan 23 23:57:21.668696 containerd[1608]: time="2026-01-23T23:57:21.668653314Z" level=info msg="StartContainer for \"d5842b3bb33655b3b1140a1163474632383fe49ab2001404e67b1cc98d69d7df\"" Jan 23 23:57:21.738464 containerd[1608]: time="2026-01-23T23:57:21.735324837Z" level=info msg="StartContainer for \"d5842b3bb33655b3b1140a1163474632383fe49ab2001404e67b1cc98d69d7df\" returns successfully" Jan 23 23:57:21.842566 kubelet[2766]: I0123 23:57:21.842521 2766 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-417febb2dd" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T23:57:21Z","lastTransitionTime":"2026-01-23T23:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 23:57:22.047807 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 23 23:57:22.679680 kubelet[2766]: I0123 23:57:22.679590 2766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-76w6c" podStartSLOduration=5.679572622 podStartE2EDuration="5.679572622s" podCreationTimestamp="2026-01-23 23:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:22.673241858 +0000 UTC m=+204.800779867" watchObservedRunningTime="2026-01-23 23:57:22.679572622 +0000 UTC m=+204.807110591" Jan 23 23:57:23.007633 kubelet[2766]: E0123 23:57:23.007439 2766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-mnkd7" podUID="51ec73a2-cfcb-4ba5-bc9c-5cbf0e3aab88" Jan 23 23:57:24.936926 systemd-networkd[1249]: lxc_health: Link UP Jan 23 23:57:24.955375 systemd-networkd[1249]: lxc_health: Gained carrier Jan 23 23:57:25.882533 systemd[1]: run-containerd-runc-k8s.io-d5842b3bb33655b3b1140a1163474632383fe49ab2001404e67b1cc98d69d7df-runc.8cYfJq.mount: Deactivated successfully. Jan 23 23:57:26.793912 systemd-networkd[1249]: lxc_health: Gained IPv6LL Jan 23 23:57:30.385301 sshd[4651]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:30.392002 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:57:30.393151 systemd[1]: sshd@24-49.13.80.198:22-20.161.92.111:38760.service: Deactivated successfully. Jan 23 23:57:30.397224 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:57:30.398427 systemd-logind[1575]: Removed session 25. 
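The setters.go entry above records the kubelet flipping the node's Ready condition to False while the CNI plugin is still initializing; once cilium-agent is running and the lxc_health interface gains carrier, the condition is expected to return to True on a subsequent status update. A hedged sketch for polling that condition follows; the node name is taken from the log, and the kubeconfig path is again an assumption.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            // Node name taken from the "Node became not ready" entry above.
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4081-3-6-n-417febb2dd", metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("%s Ready=%s reason=%s\n", time.Now().Format(time.RFC3339), c.Status, c.Reason)
                }
            }
            time.Sleep(10 * time.Second)
        }
    }

The reason field should move from KubeletNotReady to KubeletReady once the CNI plugin reports itself initialized.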
Jan 23 23:57:45.295192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571-rootfs.mount: Deactivated successfully. Jan 23 23:57:45.312474 containerd[1608]: time="2026-01-23T23:57:45.312374105Z" level=info msg="shim disconnected" id=a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571 namespace=k8s.io Jan 23 23:57:45.312474 containerd[1608]: time="2026-01-23T23:57:45.312474306Z" level=warning msg="cleaning up after shim disconnected" id=a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571 namespace=k8s.io Jan 23 23:57:45.313358 containerd[1608]: time="2026-01-23T23:57:45.312490386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:45.677878 kubelet[2766]: E0123 23:57:45.677827 2766 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34338->10.0.0.2:2379: read: connection timed out" Jan 23 23:57:45.716857 kubelet[2766]: I0123 23:57:45.715787 2766 scope.go:117] "RemoveContainer" containerID="a7f7bf2227c25d9c84e35d3973b0c7b81f2fe989c2e7201929932a9af879c571" Jan 23 23:57:45.721039 containerd[1608]: time="2026-01-23T23:57:45.720915519Z" level=info msg="CreateContainer within sandbox \"b257bd80acc2d3209ef27c25590bcf852a25ca906e65f5d0c1b1de113f275c82\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:57:45.739051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337188948.mount: Deactivated successfully. Jan 23 23:57:45.743484 containerd[1608]: time="2026-01-23T23:57:45.743243100Z" level=info msg="CreateContainer within sandbox \"b257bd80acc2d3209ef27c25590bcf852a25ca906e65f5d0c1b1de113f275c82\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"45d9906eebdec393b2faef06b3dc384a9474ec8e76c7199252bf88998dc8c7f2\"" Jan 23 23:57:45.744846 containerd[1608]: time="2026-01-23T23:57:45.743777147Z" level=info msg="StartContainer for \"45d9906eebdec393b2faef06b3dc384a9474ec8e76c7199252bf88998dc8c7f2\"" Jan 23 23:57:45.811896 containerd[1608]: time="2026-01-23T23:57:45.811056652Z" level=info msg="StartContainer for \"45d9906eebdec393b2faef06b3dc384a9474ec8e76c7199252bf88998dc8c7f2\" returns successfully" Jan 23 23:57:50.090391 kubelet[2766]: E0123 23:57:50.090217 2766 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34148->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-417febb2dd.188d818a4be23a62 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-417febb2dd,UID:6aa46b04a2b2c2073da05c9882626e05,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-417febb2dd,},FirstTimestamp:2026-01-23 23:57:39.605060194 +0000 UTC m=+221.732598203,LastTimestamp:2026-01-23 23:57:39.605060194 +0000 UTC m=+221.732598203,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-417febb2dd,}" Jan 23 23:57:50.841531 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79-rootfs.mount: Deactivated successfully. Jan 23 23:57:50.850481 containerd[1608]: time="2026-01-23T23:57:50.850179769Z" level=info msg="shim disconnected" id=75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79 namespace=k8s.io Jan 23 23:57:50.850481 containerd[1608]: time="2026-01-23T23:57:50.850313930Z" level=warning msg="cleaning up after shim disconnected" id=75832f1b39ed9450b42e969726c2fc55d1598975642973b560113f19905e4b79 namespace=k8s.io Jan 23 23:57:50.850481 containerd[1608]: time="2026-01-23T23:57:50.850324811Z" level=info msg="cleaning up dead shim" namespace=k8s.io
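The RemoveContainer/CreateContainer pair with Attempt:1 above shows the kubelet restarting the kube-controller-manager container inside its existing sandbox after its shim exited, while the surrounding "Failed to update lease" and rejected-event errors point at etcd reads on 10.0.0.2:2379 timing out during the same window. A hedged sketch for spotting such restarts across the control-plane namespace is given below; only the kubeconfig path is assumed, everything else is taken from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                // A restart count of 1 here corresponds to the Attempt:1
                // CreateContainer call seen in the log above.
                if s.RestartCount > 0 {
                    fmt.Printf("%s/%s restarts=%d\n", p.Name, s.Name, s.RestartCount)
                }
            }
        }
    }

If restarts keep accumulating on the control-plane pods while the lease errors persist, the etcd connectivity problem, rather than the containers themselves, is the more likely root cause.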