Jan 17 00:00:35.889499 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 00:00:35.889523 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:00:35.889534 kernel: KASLR enabled
Jan 17 00:00:35.889540 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:00:35.889546 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 17 00:00:35.889551 kernel: random: crng init done
Jan 17 00:00:35.889558 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:00:35.889564 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 17 00:00:35.889571 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:00:35.889578 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889584 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889590 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889596 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889602 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889610 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889618 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889624 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889631 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:35.889637 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:00:35.889643 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 17 00:00:35.889649 kernel: NUMA: Failed to initialise from firmware
Jan 17 00:00:35.889656 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:00:35.889663 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 17 00:00:35.889669 kernel: Zone ranges:
Jan 17 00:00:35.889675 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 00:00:35.889683 kernel:   DMA32    empty
Jan 17 00:00:35.889689 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 17 00:00:35.889695 kernel: Movable zone start for each node
Jan 17 00:00:35.889702 kernel: Early memory node ranges
Jan 17 00:00:35.889708 kernel:   node   0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 17 00:00:35.889715 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 17 00:00:35.889721 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 17 00:00:35.889727 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 17 00:00:35.889734 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 17 00:00:35.889740 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 17 00:00:35.889746 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 17 00:00:35.889753 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:00:35.889760 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:00:35.889767 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:00:35.889774 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 00:00:35.889783 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:00:35.889789 kernel: psci: Trusted OS migration not required
Jan 17 00:00:35.889796 kernel: psci: SMC Calling Convention v1.1
Jan 17 00:00:35.889805 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 00:00:35.889811 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:00:35.889818 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:00:35.889825 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:00:35.889831 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:00:35.889838 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:00:35.889845 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 00:00:35.889852 kernel: CPU features: detected: Spectre-v4
Jan 17 00:00:35.889858 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:00:35.889865 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 00:00:35.889873 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 00:00:35.889880 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 00:00:35.889887 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 00:00:35.889894 kernel: alternatives: applying boot alternatives
Jan 17 00:00:35.889926 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:00:35.889935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:00:35.889942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:00:35.889949 kernel: Fallback order for Node 0: 0
Jan 17 00:00:35.889956 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 17 00:00:35.889963 kernel: Policy zone: Normal
Jan 17 00:00:35.889969 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:00:35.889978 kernel: software IO TLB: area num 2.
Jan 17 00:00:35.889985 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 17 00:00:35.889992 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 17 00:00:35.889999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:00:35.890006 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:00:35.890014 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:00:35.890021 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:00:35.890027 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:00:35.890034 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:00:35.890041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:00:35.890048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:00:35.890055 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:00:35.890063 kernel: GICv3: 256 SPIs implemented
Jan 17 00:00:35.890070 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:00:35.890076 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:00:35.890083 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 00:00:35.890090 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 00:00:35.890097 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 00:00:35.890103 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 00:00:35.890110 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 00:00:35.890117 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 17 00:00:35.890124 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 17 00:00:35.890131 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:00:35.890139 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:35.890146 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 00:00:35.890153 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 00:00:35.890160 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 00:00:35.890166 kernel: Console: colour dummy device 80x25
Jan 17 00:00:35.890173 kernel: ACPI: Core revision 20230628
Jan 17 00:00:35.890180 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 00:00:35.890187 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:00:35.890194 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:00:35.890201 kernel: landlock: Up and running.
Jan 17 00:00:35.890209 kernel: SELinux: Initializing.
Jan 17 00:00:35.890216 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:35.890224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:35.890231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:35.890238 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:35.890245 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:00:35.890252 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:00:35.890259 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 00:00:35.890266 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 00:00:35.890274 kernel: Remapping and enabling EFI services.
Jan 17 00:00:35.890281 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:00:35.890288 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:00:35.890295 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 00:00:35.890302 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 17 00:00:35.890309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:35.890316 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 00:00:35.890323 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:00:35.890330 kernel: SMP: Total of 2 processors activated.
Jan 17 00:00:35.890339 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:00:35.890346 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 00:00:35.890353 kernel: CPU features: detected: Common not Private translations
Jan 17 00:00:35.890365 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:00:35.890375 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 00:00:35.890382 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 00:00:35.890389 kernel: CPU features: detected: LSE atomic instructions
Jan 17 00:00:35.890406 kernel: CPU features: detected: Privileged Access Never
Jan 17 00:00:35.890414 kernel: CPU features: detected: RAS Extension Support
Jan 17 00:00:35.890424 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 00:00:35.890431 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:00:35.890438 kernel: alternatives: applying system-wide alternatives
Jan 17 00:00:35.890446 kernel: devtmpfs: initialized
Jan 17 00:00:35.890453 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:00:35.890460 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:00:35.890468 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:00:35.890475 kernel: SMBIOS 3.0.0 present.
Jan 17 00:00:35.890484 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 17 00:00:35.890491 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:00:35.890498 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:00:35.890506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:00:35.890513 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:00:35.890521 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:00:35.890528 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 17 00:00:35.890535 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:00:35.890543 kernel: cpuidle: using governor menu
Jan 17 00:00:35.890551 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:00:35.890559 kernel: ASID allocator initialised with 32768 entries
Jan 17 00:00:35.890566 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:00:35.890573 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:00:35.890581 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 00:00:35.890588 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 00:00:35.890595 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:00:35.890603 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:00:35.890610 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:00:35.890619 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:00:35.890626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:00:35.890634 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:00:35.890641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:00:35.890648 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:00:35.890655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:00:35.890663 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:00:35.890670 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:00:35.890677 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:00:35.890686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:00:35.890693 kernel: ACPI: Interpreter enabled
Jan 17 00:00:35.890701 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:00:35.890708 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 00:00:35.890716 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 00:00:35.890723 kernel: printk: console [ttyAMA0] enabled
Jan 17 00:00:35.890731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:00:35.890884 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:00:35.890994 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:00:35.891062 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:00:35.891129 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 00:00:35.891193 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 00:00:35.891203 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 00:00:35.891211 kernel: PCI host bridge to bus 0000:00
Jan 17 00:00:35.891283 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 00:00:35.891345 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 00:00:35.891418 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 00:00:35.891483 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:00:35.891572 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 00:00:35.891648 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 17 00:00:35.891716 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 17 00:00:35.891783 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:00:35.891862 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892041 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 17 00:00:35.892121 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892190 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 17 00:00:35.892263 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892330 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 17 00:00:35.892423 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892492 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 17 00:00:35.892564 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892630 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 17 00:00:35.892702 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892768 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 17 00:00:35.892844 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.892981 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 17 00:00:35.893073 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.893141 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 17 00:00:35.893214 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:35.893281 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 17 00:00:35.893360 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 17 00:00:35.893472 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 17 00:00:35.893554 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:00:35.893626 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 17 00:00:35.893696 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:00:35.893784 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:00:35.893871 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:00:35.893975 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 17 00:00:35.894054 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:00:35.894123 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 17 00:00:35.894191 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 17 00:00:35.894266 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:00:35.894336 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 17 00:00:35.894433 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:00:35.894505 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 17 00:00:35.894574 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 17 00:00:35.895391 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:00:35.895519 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 17 00:00:35.895588 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:00:35.895670 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:00:35.895738 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 17 00:00:35.895805 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 17 00:00:35.895872 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:00:35.895961 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 17 00:00:35.896029 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:00:35.896098 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:00:35.896183 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 17 00:00:35.896251 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 17 00:00:35.896316 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 17 00:00:35.896385 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 00:00:35.896467 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:00:35.896533 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:00:35.896601 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 00:00:35.896666 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 17 00:00:35.896735 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 17 00:00:35.896804 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 00:00:35.896870 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:00:35.898247 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:00:35.898336 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 00:00:35.898451 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:00:35.898531 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:00:35.898611 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:00:35.898677 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:00:35.898740 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:00:35.898809 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:00:35.898875 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:00:35.899063 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:00:35.899141 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:00:35.899223 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:00:35.899287 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:00:35.899355 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 17 00:00:35.899440 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:35.899511 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 17 00:00:35.899575 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:35.899643 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 17 00:00:35.899712 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:35.899779 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 17 00:00:35.899843 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:35.899922 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 17 00:00:35.899989 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:35.900976 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:35.901088 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:35.901165 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:35.901231 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:35.901297 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:35.901362 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:35.901493 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 17 00:00:35.901565 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:35.901637 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 17 00:00:35.901710 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 17 00:00:35.901779 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 17 00:00:35.901846 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:00:35.901931 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 17 00:00:35.902002 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:00:35.902078 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 17 00:00:35.902144 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:00:35.902212 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 17 00:00:35.902281 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 00:00:35.902349 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 17 00:00:35.902426 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 00:00:35.902494 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 17 00:00:35.902560 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 00:00:35.902627 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 17 00:00:35.902693 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 00:00:35.902759 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 17 00:00:35.902828 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 17 00:00:35.902893 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 17 00:00:35.905235 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 17 00:00:35.905317 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 17 00:00:35.905393 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 17 00:00:35.905492 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:00:35.905561 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 17 00:00:35.905629 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:00:35.905705 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Jan 17 00:00:35.905770 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 17 00:00:35.905834 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:35.905921 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 17 00:00:35.906006 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:00:35.906074 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Jan 17 00:00:35.906138 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 17 00:00:35.906203 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:35.906279 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:00:35.906347 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 17 00:00:35.906468 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:00:35.906540 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Jan 17 00:00:35.906612 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 17 00:00:35.906679 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:35.906754 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:00:35.906824 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:00:35.908479 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Jan 17 00:00:35.908589 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 17 00:00:35.908658 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:35.908734 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 17 00:00:35.908813 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 17 00:00:35.908882 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:00:35.908983 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Jan 17 00:00:35.909051 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 17 00:00:35.909115 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:35.909187 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 17 00:00:35.909254 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 17 00:00:35.909320 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:00:35.909391 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Jan 17 00:00:35.909516 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:35.909592 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:35.909673 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 17 00:00:35.909743 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 17 00:00:35.909812 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 17 00:00:35.909880 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:00:35.910380 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Jan 17 00:00:35.910496 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:35.910564 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:35.910633 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:00:35.910698 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Jan 17 00:00:35.910767 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:35.910838 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:35.912326 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:00:35.912479 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Jan 17 00:00:35.912564 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 17 00:00:35.912630 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:35.912697 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 00:00:35.912757 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 00:00:35.912818 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 00:00:35.912889 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 17 00:00:35.912987 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 17 00:00:35.913053 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:35.913120 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 17 00:00:35.913181 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 17 00:00:35.913241 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:35.913313 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 17 00:00:35.913493 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 17 00:00:35.913564 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:35.913633 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 17 00:00:35.913692 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 17 00:00:35.913765 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:35.913832 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 17 00:00:35.913892 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 17 00:00:35.913981 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:35.914062 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 17 00:00:35.914122 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:35.914185 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:35.914251 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 17 00:00:35.914315 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:35.914376 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:35.914459 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 17 00:00:35.914522 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:35.914583 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:35.914650 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 17 00:00:35.914712 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 17 00:00:35.914776 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:35.914787 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 00:00:35.914795 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 00:00:35.914803 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 00:00:35.914811 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 00:00:35.914819 kernel: iommu: Default domain type: Translated
Jan 17 00:00:35.914827 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:00:35.914835 kernel: efivars: Registered efivars operations
Jan 17 00:00:35.914844 kernel: vgaarb: loaded
Jan 17 00:00:35.914853 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:00:35.914861 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:00:35.914868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:00:35.914876 kernel: pnp: PnP ACPI init
Jan 17 00:00:35.915050 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 00:00:35.915065 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 00:00:35.915073 kernel: NET: Registered PF_INET protocol family
Jan 17 00:00:35.915081 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:00:35.915093 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:00:35.915101 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:00:35.915109 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:00:35.915117 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:00:35.915125 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:00:35.915133 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:35.915141 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:35.915148 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:00:35.915225 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 17 00:00:35.915239 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:00:35.915247 kernel: kvm [1]: HYP mode not available Jan 17 00:00:35.915255 kernel: Initialise system trusted keyrings Jan 17 00:00:35.915263 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:00:35.915271 kernel: Key type asymmetric registered Jan 17 00:00:35.915278 kernel: Asymmetric key parser 'x509' registered Jan 17 00:00:35.915286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:00:35.915294 kernel: io scheduler mq-deadline registered Jan 17 00:00:35.915302 kernel: io scheduler kyber registered Jan 17 00:00:35.915311 kernel: io scheduler bfq registered Jan 17 00:00:35.915320 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 17 00:00:35.915390 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 17 00:00:35.915508 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 17 00:00:35.915576 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.915644 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 17 00:00:35.915711 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 17 00:00:35.915780 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.915852 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Jan 17 00:00:35.915935 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 17 00:00:35.916004 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.916072 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 17 00:00:35.916143 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 17 00:00:35.916209 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.916279 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 17 00:00:35.916345 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 17 00:00:35.916428 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.916502 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 17 00:00:35.916574 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 17 00:00:35.916642 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.916711 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 17 00:00:35.916778 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 17 00:00:35.916844 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.916926 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 17 00:00:35.917001 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 17 00:00:35.917071 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.917082 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Jan 17 00:00:35.917148 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 17 00:00:35.917215 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 17 00:00:35.917282 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:35.917293 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 17 00:00:35.917303 kernel: ACPI: button: Power Button [PWRB] Jan 17 00:00:35.917311 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 17 00:00:35.917382 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 17 00:00:35.917504 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 17 00:00:35.917519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:00:35.917527 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 17 00:00:35.917597 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 17 00:00:35.917608 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 17 00:00:35.917617 kernel: thunder_xcv, ver 1.0 Jan 17 00:00:35.917628 kernel: thunder_bgx, ver 1.0 Jan 17 00:00:35.917636 kernel: nicpf, ver 1.0 Jan 17 00:00:35.917644 kernel: nicvf, ver 1.0 Jan 17 00:00:35.917735 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:00:35.917803 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:00:35 UTC (1768608035) Jan 17 00:00:35.917814 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:00:35.917822 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 17 00:00:35.917830 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:00:35.917840 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:00:35.917848 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:00:35.917856 kernel: Segment Routing with IPv6 Jan 17 00:00:35.917863 kernel: In-situ OAM 
(IOAM) with IPv6 Jan 17 00:00:35.917871 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:00:35.917879 kernel: Key type dns_resolver registered Jan 17 00:00:35.917887 kernel: registered taskstats version 1 Jan 17 00:00:35.917894 kernel: Loading compiled-in X.509 certificates Jan 17 00:00:35.917923 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:00:35.917933 kernel: Key type .fscrypt registered Jan 17 00:00:35.917941 kernel: Key type fscrypt-provisioning registered Jan 17 00:00:35.917961 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:00:35.917970 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:00:35.917978 kernel: ima: No architecture policies found Jan 17 00:00:35.917993 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:00:35.918001 kernel: clk: Disabling unused clocks Jan 17 00:00:35.918009 kernel: Freeing unused kernel memory: 39424K Jan 17 00:00:35.918017 kernel: Run /init as init process Jan 17 00:00:35.918026 kernel: with arguments: Jan 17 00:00:35.918034 kernel: /init Jan 17 00:00:35.918042 kernel: with environment: Jan 17 00:00:35.918049 kernel: HOME=/ Jan 17 00:00:35.918057 kernel: TERM=linux Jan 17 00:00:35.918067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:00:35.918077 systemd[1]: Detected virtualization kvm. Jan 17 00:00:35.918085 systemd[1]: Detected architecture arm64. Jan 17 00:00:35.918095 systemd[1]: Running in initrd. Jan 17 00:00:35.918103 systemd[1]: No hostname configured, using default hostname. Jan 17 00:00:35.918111 systemd[1]: Hostname set to . 
Jan 17 00:00:35.918119 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:00:35.918127 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:00:35.918136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:35.918144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:35.918153 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:00:35.918163 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:00:35.918172 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:00:35.918182 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:00:35.918192 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:00:35.918200 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:00:35.918209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:35.918217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:35.918227 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:00:35.918236 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:00:35.918244 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:00:35.918253 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:00:35.918261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:00:35.918269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:00:35.918278 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 17 00:00:35.918286 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:00:35.918296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:00:35.918305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:35.918313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:35.918321 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:00:35.918330 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:00:35.918338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:00:35.918346 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:00:35.918354 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:00:35.918363 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:00:35.918373 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:00:35.918381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:35.918390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:00:35.918406 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:35.918439 systemd-journald[238]: Collecting audit messages is disabled. Jan 17 00:00:35.918462 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:00:35.918471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:00:35.918479 kernel: Bridge firewalling registered Jan 17 00:00:35.918489 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:00:35.918498 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 17 00:00:35.918506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:35.918515 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:00:35.918525 systemd-journald[238]: Journal started Jan 17 00:00:35.918544 systemd-journald[238]: Runtime Journal (/run/log/journal/d08adbb4f4444a149dc376a7232f8214) is 8.0M, max 76.6M, 68.6M free. Jan 17 00:00:35.882809 systemd-modules-load[239]: Inserted module 'overlay' Jan 17 00:00:35.901999 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 17 00:00:35.921934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:00:35.926145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:00:35.939242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:00:35.939303 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:00:35.950717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:00:35.951720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:35.955934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:35.956774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:35.963120 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:00:35.966207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:35.976781 dracut-cmdline[271]: dracut-dracut-053 Jan 17 00:00:35.978682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:00:35.979474 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:00:36.010584 systemd-resolved[277]: Positive Trust Anchors: Jan 17 00:00:36.010602 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:00:36.010636 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:00:36.016276 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 17 00:00:36.018849 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:00:36.022385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:36.080974 kernel: SCSI subsystem initialized Jan 17 00:00:36.085933 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:00:36.093954 kernel: iscsi: registered transport (tcp) Jan 17 00:00:36.107161 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:00:36.107274 kernel: QLogic iSCSI HBA Driver Jan 17 00:00:36.153592 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:00:36.161114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 17 00:00:36.180235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:00:36.180300 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:00:36.180925 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:00:36.230962 kernel: raid6: neonx8 gen() 15594 MB/s Jan 17 00:00:36.247956 kernel: raid6: neonx4 gen() 15588 MB/s Jan 17 00:00:36.264949 kernel: raid6: neonx2 gen() 13170 MB/s Jan 17 00:00:36.281954 kernel: raid6: neonx1 gen() 10412 MB/s Jan 17 00:00:36.298978 kernel: raid6: int64x8 gen() 6903 MB/s Jan 17 00:00:36.315960 kernel: raid6: int64x4 gen() 7294 MB/s Jan 17 00:00:36.332945 kernel: raid6: int64x2 gen() 6086 MB/s Jan 17 00:00:36.349972 kernel: raid6: int64x1 gen() 5030 MB/s Jan 17 00:00:36.350058 kernel: raid6: using algorithm neonx8 gen() 15594 MB/s Jan 17 00:00:36.366964 kernel: raid6: .... xor() 11803 MB/s, rmw enabled Jan 17 00:00:36.367042 kernel: raid6: using neon recovery algorithm Jan 17 00:00:36.372205 kernel: xor: measuring software checksum speed Jan 17 00:00:36.372251 kernel: 8regs : 18718 MB/sec Jan 17 00:00:36.372273 kernel: 32regs : 19660 MB/sec Jan 17 00:00:36.372946 kernel: arm64_neon : 27061 MB/sec Jan 17 00:00:36.372980 kernel: xor: using function: arm64_neon (27061 MB/sec) Jan 17 00:00:36.423041 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:00:36.436783 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:00:36.442192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:36.469409 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 17 00:00:36.472964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:36.484150 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 17 00:00:36.505497 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jan 17 00:00:36.542576 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:00:36.549159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:00:36.601353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:36.610134 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:00:36.629670 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:00:36.633110 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:00:36.635358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:36.637177 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:00:36.643115 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:00:36.668490 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:00:36.716912 kernel: ACPI: bus type USB registered Jan 17 00:00:36.716975 kernel: usbcore: registered new interface driver usbfs Jan 17 00:00:36.716986 kernel: usbcore: registered new interface driver hub Jan 17 00:00:36.720879 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:00:36.723928 kernel: usbcore: registered new device driver usb Jan 17 00:00:36.726929 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:00:36.727780 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 17 00:00:36.738830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:00:36.740914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:36.743016 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:00:36.746869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:36.747630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:36.749709 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:36.754933 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 00:00:36.755115 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 17 00:00:36.756952 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 00:00:36.760295 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 17 00:00:36.760494 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 17 00:00:36.761163 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 17 00:00:36.761427 kernel: hub 1-0:1.0: USB hub found Jan 17 00:00:36.763087 kernel: hub 1-0:1.0: 4 ports detected Jan 17 00:00:36.763224 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 17 00:00:36.762197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:36.766925 kernel: hub 2-0:1.0: USB hub found Jan 17 00:00:36.769942 kernel: hub 2-0:1.0: 4 ports detected Jan 17 00:00:36.772448 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 17 00:00:36.777927 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 17 00:00:36.778146 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:00:36.778159 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:00:36.778283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:36.784126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:00:36.789984 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 17 00:00:36.790206 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 17 00:00:36.791414 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 17 00:00:36.791846 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 17 00:00:36.792029 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:00:36.796244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:00:36.796294 kernel: GPT:17805311 != 80003071 Jan 17 00:00:36.796306 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:00:36.796326 kernel: GPT:17805311 != 80003071 Jan 17 00:00:36.796336 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:00:36.796346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:36.797008 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 17 00:00:36.826090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:36.841984 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (511) Jan 17 00:00:36.843940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 17 00:00:36.856559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 17 00:00:36.861922 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (521) Jan 17 00:00:36.867121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:00:36.877894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 17 00:00:36.879697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Jan 17 00:00:36.890101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:00:36.898935 disk-uuid[573]: Primary Header is updated. Jan 17 00:00:36.898935 disk-uuid[573]: Secondary Entries is updated. Jan 17 00:00:36.898935 disk-uuid[573]: Secondary Header is updated. Jan 17 00:00:36.906941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:36.912931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:37.000960 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 00:00:37.138015 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 17 00:00:37.138108 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 17 00:00:37.138520 kernel: usbcore: registered new interface driver usbhid Jan 17 00:00:37.138547 kernel: usbhid: USB HID core driver Jan 17 00:00:37.243465 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 17 00:00:37.371933 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 17 00:00:37.424973 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 17 00:00:37.916931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:37.917367 disk-uuid[574]: The operation has completed successfully. Jan 17 00:00:37.968293 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:00:37.968433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:00:37.984226 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 17 00:00:37.990309 sh[588]: Success Jan 17 00:00:38.007129 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:00:38.060967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:00:38.069060 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:00:38.073172 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:00:38.098150 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:00:38.098232 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:38.098260 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:00:38.099098 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:00:38.099142 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:00:38.105946 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:00:38.108138 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:00:38.110859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:00:38.121136 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:00:38.125130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 00:00:38.139793 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:38.139847 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:38.139858 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:00:38.144942 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:00:38.145001 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:00:38.154638 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:00:38.155479 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:38.163092 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:00:38.168137 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:00:38.240961 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:00:38.248458 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:00:38.272818 ignition[681]: Ignition 2.19.0 Jan 17 00:00:38.273500 ignition[681]: Stage: fetch-offline Jan 17 00:00:38.273999 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:38.274174 systemd-networkd[776]: lo: Link UP Jan 17 00:00:38.274177 systemd-networkd[776]: lo: Gained carrier Jan 17 00:00:38.275818 systemd-networkd[776]: Enumeration completed Jan 17 00:00:38.275710 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:00:38.275943 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:00:38.275888 ignition[681]: parsed url from cmdline: "" Jan 17 00:00:38.278257 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 00:00:38.275891 ignition[681]: no config URL provided Jan 17 00:00:38.279265 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:38.275896 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:00:38.279268 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:38.275917 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:00:38.280093 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:38.275923 ignition[681]: failed to fetch config: resource requires networking Jan 17 00:00:38.280096 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:38.276162 ignition[681]: Ignition finished successfully Jan 17 00:00:38.280545 systemd[1]: Reached target network.target - Network. Jan 17 00:00:38.280624 systemd-networkd[776]: eth0: Link UP Jan 17 00:00:38.280627 systemd-networkd[776]: eth0: Gained carrier Jan 17 00:00:38.280635 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:38.284182 systemd-networkd[776]: eth1: Link UP Jan 17 00:00:38.284185 systemd-networkd[776]: eth1: Gained carrier Jan 17 00:00:38.284194 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:38.289117 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 00:00:38.303808 ignition[779]: Ignition 2.19.0
Jan 17 00:00:38.303818 ignition[779]: Stage: fetch
Jan 17 00:00:38.304007 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:38.304017 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:38.304107 ignition[779]: parsed url from cmdline: ""
Jan 17 00:00:38.304110 ignition[779]: no config URL provided
Jan 17 00:00:38.304114 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:00:38.304121 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:00:38.304141 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 17 00:00:38.304789 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 17 00:00:38.323020 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 17 00:00:38.340025 systemd-networkd[776]: eth0: DHCPv4 address 188.245.80.168/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 17 00:00:38.505740 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 17 00:00:38.512031 ignition[779]: GET result: OK
Jan 17 00:00:38.512180 ignition[779]: parsing config with SHA512: d975d0eecad1579320db384b2da1b3ece181f82b58e8091b6ff9b1f1d8a3a40300e240b9eee992f2d83d2dd63af1e63a337bf378c77ccee3c534636df30773d4
Jan 17 00:00:38.517379 unknown[779]: fetched base config from "system"
Jan 17 00:00:38.517391 unknown[779]: fetched base config from "system"
Jan 17 00:00:38.517848 ignition[779]: fetch: fetch complete
Jan 17 00:00:38.517397 unknown[779]: fetched user config from "hetzner"
Jan 17 00:00:38.517854 ignition[779]: fetch: fetch passed
Jan 17 00:00:38.521997 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:00:38.518017 ignition[779]: Ignition finished successfully
Jan 17 00:00:38.528102 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:00:38.541900 ignition[787]: Ignition 2.19.0
Jan 17 00:00:38.541925 ignition[787]: Stage: kargs
Jan 17 00:00:38.542102 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:38.542112 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:38.543024 ignition[787]: kargs: kargs passed
Jan 17 00:00:38.543076 ignition[787]: Ignition finished successfully
Jan 17 00:00:38.546987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:00:38.556646 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:00:38.571022 ignition[794]: Ignition 2.19.0
Jan 17 00:00:38.571032 ignition[794]: Stage: disks
Jan 17 00:00:38.571212 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:38.571222 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:38.576013 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:00:38.572235 ignition[794]: disks: disks passed
Jan 17 00:00:38.577161 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:00:38.572290 ignition[794]: Ignition finished successfully
Jan 17 00:00:38.578501 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:00:38.580241 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:00:38.581283 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:00:38.582732 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:00:38.590071 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:00:38.606892 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:00:38.611561 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:00:38.618165 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:00:38.667931 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:00:38.669136 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:00:38.670308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:00:38.681413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:00:38.685078 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:00:38.688173 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:00:38.691033 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:00:38.691079 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:00:38.704292 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810)
Jan 17 00:00:38.704356 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:38.704407 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:38.704429 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:38.708840 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:00:38.717213 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:00:38.717311 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:38.719276 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:00:38.727324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:00:38.758337 coreos-metadata[812]: Jan 17 00:00:38.758 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 17 00:00:38.761352 coreos-metadata[812]: Jan 17 00:00:38.759 INFO Fetch successful
Jan 17 00:00:38.761352 coreos-metadata[812]: Jan 17 00:00:38.760 INFO wrote hostname ci-4081-3-6-n-5d990e87a1 to /sysroot/etc/hostname
Jan 17 00:00:38.763143 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:00:38.765779 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:00:38.772977 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:00:38.778132 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:00:38.782163 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:00:38.887689 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:00:38.898116 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:00:38.904227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:00:38.910953 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:38.936396 ignition[927]: INFO : Ignition 2.19.0
Jan 17 00:00:38.936396 ignition[927]: INFO : Stage: mount
Jan 17 00:00:38.936396 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:38.936396 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:38.941087 ignition[927]: INFO : mount: mount passed
Jan 17 00:00:38.941087 ignition[927]: INFO : Ignition finished successfully
Jan 17 00:00:38.943119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:00:38.945073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:00:38.953458 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:00:39.099920 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:00:39.114965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:00:39.124950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Jan 17 00:00:39.127208 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:39.127276 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:39.127300 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:39.131995 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:00:39.132100 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:39.135663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:00:39.162035 ignition[958]: INFO : Ignition 2.19.0
Jan 17 00:00:39.162794 ignition[958]: INFO : Stage: files
Jan 17 00:00:39.163559 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:39.165512 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:39.165512 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:00:39.167951 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:00:39.169033 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:00:39.172852 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:00:39.173861 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:00:39.176014 unknown[958]: wrote ssh authorized keys file for user: core
Jan 17 00:00:39.178173 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:00:39.179596 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:00:39.181298 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 17 00:00:39.291027 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:00:39.379058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:00:39.380791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:00:39.380791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 17 00:00:39.397392 systemd-networkd[776]: eth1: Gained IPv6LL
Jan 17 00:00:39.686317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:00:39.862657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:39.871834 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 17 00:00:40.116973 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:00:40.165435 systemd-networkd[776]: eth0: Gained IPv6LL
Jan 17 00:00:40.487968 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:40.487968 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:00:40.491306 ignition[958]: INFO : files: files passed
Jan 17 00:00:40.491306 ignition[958]: INFO : Ignition finished successfully
Jan 17 00:00:40.492657 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:00:40.501105 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:00:40.502486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:00:40.511410 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:00:40.511633 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:00:40.523773 initrd-setup-root-after-ignition[987]: grep:
Jan 17 00:00:40.524541 initrd-setup-root-after-ignition[991]: grep:
Jan 17 00:00:40.524541 initrd-setup-root-after-ignition[987]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:00:40.525846 initrd-setup-root-after-ignition[991]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:00:40.527061 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:00:40.532086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:00:40.533235 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:00:40.546292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:00:40.594818 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:00:40.594983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:00:40.597026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:00:40.600107 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:00:40.601240 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:00:40.606214 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:00:40.624944 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:00:40.637476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:00:40.649046 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:00:40.650697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:00:40.652439 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:00:40.653391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:00:40.653587 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:00:40.655309 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:00:40.656735 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:00:40.657832 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:00:40.658988 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:00:40.660323 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:00:40.661640 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:00:40.662813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:00:40.664125 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:00:40.665384 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:00:40.666578 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:00:40.667491 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:00:40.667628 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:00:40.669052 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:00:40.669744 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:00:40.671033 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:00:40.671120 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:00:40.672457 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:00:40.672587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:00:40.674153 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:00:40.674277 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:00:40.675720 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:00:40.675820 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:00:40.676738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:00:40.676839 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:00:40.682185 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:00:40.683264 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:00:40.683418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:00:40.689170 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:00:40.689803 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:00:40.690097 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:00:40.692550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:00:40.693017 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:00:40.704956 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:00:40.705083 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:00:40.709006 ignition[1011]: INFO : Ignition 2.19.0
Jan 17 00:00:40.709006 ignition[1011]: INFO : Stage: umount
Jan 17 00:00:40.709006 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:40.709006 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:40.712283 ignition[1011]: INFO : umount: umount passed
Jan 17 00:00:40.712895 ignition[1011]: INFO : Ignition finished successfully
Jan 17 00:00:40.714842 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:00:40.715992 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:00:40.720126 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:00:40.721814 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:00:40.722747 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:00:40.723666 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:00:40.723715 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:00:40.724379 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:00:40.724422 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:00:40.726038 systemd[1]: Stopped target network.target - Network.
Jan 17 00:00:40.726570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:00:40.726632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:00:40.727657 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:00:40.728555 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:00:40.732980 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:00:40.734828 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:00:40.735612 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:00:40.736695 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:00:40.736752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:00:40.737828 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:00:40.737878 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:00:40.738874 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:00:40.738951 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:00:40.740615 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:00:40.740661 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:00:40.742521 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:00:40.743525 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:00:40.744315 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jan 17 00:00:40.744485 systemd-networkd[776]: eth1: DHCPv6 lease lost
Jan 17 00:00:40.745608 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:00:40.745694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:00:40.748736 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:00:40.748854 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:00:40.750078 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:00:40.750170 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:00:40.754396 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:00:40.754455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:00:40.755707 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:00:40.755759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:00:40.762062 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:00:40.763979 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:00:40.764053 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:00:40.764817 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:00:40.764862 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:00:40.765849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:00:40.765892 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:00:40.767164 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:00:40.767209 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:00:40.769007 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:00:40.787432 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:00:40.787603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:00:40.790398 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:00:40.790551 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:00:40.792232 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:00:40.792301 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:00:40.793438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:00:40.793473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:00:40.794525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:00:40.794572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:00:40.796276 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:00:40.796319 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:00:40.797861 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:00:40.797954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:40.814729 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:00:40.816234 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:00:40.816337 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:00:40.817949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:00:40.818028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:40.827275 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:00:40.827404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:00:40.828897 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:00:40.842287 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:00:40.855069 systemd[1]: Switching root.
Jan 17 00:00:40.895699 systemd-journald[238]: Journal stopped
Jan 17 00:00:41.854096 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:00:41.854225 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:00:41.854258 kernel: SELinux: policy capability open_perms=1
Jan 17 00:00:41.854286 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:00:41.854313 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:00:41.854363 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:00:41.854397 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:00:41.854424 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:00:41.854450 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:00:41.854473 kernel: audit: type=1403 audit(1768608041.076:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:00:41.854485 systemd[1]: Successfully loaded SELinux policy in 35.352ms.
Jan 17 00:00:41.854514 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.684ms.
Jan 17 00:00:41.854527 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:00:41.854540 systemd[1]: Detected virtualization kvm.
Jan 17 00:00:41.854554 systemd[1]: Detected architecture arm64.
Jan 17 00:00:41.854567 systemd[1]: Detected first boot.
Jan 17 00:00:41.854579 systemd[1]: Hostname set to .
Jan 17 00:00:41.854590 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:00:41.854602 zram_generator::config[1054]: No configuration found.
Jan 17 00:00:41.854615 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:00:41.854627 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:00:41.854640 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:00:41.854652 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:00:41.854726 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:00:41.854747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:00:41.854761 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:00:41.854773 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:00:41.854789 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:00:41.854802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:00:41.854814 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:00:41.854828 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:00:41.854845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:00:41.854857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:00:41.854869 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:00:41.854881 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:00:41.854893 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:00:41.854916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:00:41.854928 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 17 00:00:41.854940 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:00:41.854954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:00:41.854966 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:00:41.854978 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:00:41.854989 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:00:41.855006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:00:41.855018 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:00:41.855032 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:00:41.855044 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:00:41.855058 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:00:41.855070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:00:41.855081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:00:41.855094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:00:41.855105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:00:41.855117 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:00:41.855149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:00:41.855166 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:00:41.855178 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:00:41.855189 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:00:41.855201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:00:41.855213 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:00:41.855226 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:00:41.855242 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:00:41.855255 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:00:41.855268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:00:41.855280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:00:41.855292 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:00:41.855305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:00:41.855316 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:00:41.855355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:00:41.855378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:00:41.855392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:00:41.855405 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:00:41.855418 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:00:41.855444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:00:41.855456 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:00:41.855468 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:00:41.855480 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:00:41.855492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:00:41.855506 kernel: fuse: init (API version 7.39)
Jan 17 00:00:41.855519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:00:41.855531 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:00:41.855542 kernel: ACPI: bus type drm_connector registered
Jan 17 00:00:41.855557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:00:41.855569 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:00:41.855580 systemd[1]: Stopped verity-setup.service.
Jan 17 00:00:41.855591 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:00:41.855602 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:00:41.855613 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:00:41.855623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:00:41.855633 kernel: loop: module loaded
Jan 17 00:00:41.855643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:00:41.855655 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:00:41.855666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:00:41.855676 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:00:41.855687 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:00:41.855736 systemd-journald[1121]: Collecting audit messages is disabled.
Jan 17 00:00:41.855768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:00:41.855780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:00:41.855790 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:00:41.855803 systemd-journald[1121]: Journal started
Jan 17 00:00:41.855825 systemd-journald[1121]: Runtime Journal (/run/log/journal/d08adbb4f4444a149dc376a7232f8214) is 8.0M, max 76.6M, 68.6M free.
Jan 17 00:00:41.588095 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:00:41.609495 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:00:41.609930 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:00:41.860079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:00:41.861961 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:00:41.863698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:00:41.863894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:00:41.867354 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:00:41.867553 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:00:41.868847 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:00:41.869040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:00:41.870080 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:00:41.871312 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:00:41.872540 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:00:41.890148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:00:41.898154 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:00:41.907033 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:00:41.910030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:00:41.910072 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:00:41.913478 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:00:41.918200 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:00:41.928855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:00:41.930449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:00:41.935721 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:00:41.939401 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:00:41.940122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:00:41.942168 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:00:41.945054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:00:41.946217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:00:41.951377 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:00:41.956965 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:00:41.958368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:00:41.961516 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:00:41.964941 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:00:41.981799 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:00:41.993790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:00:42.004666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:00:42.007464 systemd-journald[1121]: Time spent on flushing to /var/log/journal/d08adbb4f4444a149dc376a7232f8214 is 42.519ms for 1131 entries.
Jan 17 00:00:42.007464 systemd-journald[1121]: System Journal (/var/log/journal/d08adbb4f4444a149dc376a7232f8214) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:00:42.071318 kernel: loop0: detected capacity change from 0 to 207008
Jan 17 00:00:42.071390 systemd-journald[1121]: Received client request to flush runtime journal.
Jan 17 00:00:42.071432 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:00:42.014170 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:00:42.015029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:00:42.018866 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:00:42.051889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:00:42.057014 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:00:42.076666 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:00:42.089974 kernel: loop1: detected capacity change from 0 to 114432
Jan 17 00:00:42.097031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:00:42.101112 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:00:42.114278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:00:42.115762 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:00:42.137933 kernel: loop2: detected capacity change from 0 to 8
Jan 17 00:00:42.152682 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 00:00:42.152706 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 00:00:42.162311 kernel: loop3: detected capacity change from 0 to 114328
Jan 17 00:00:42.169373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:00:42.200925 kernel: loop4: detected capacity change from 0 to 207008
Jan 17 00:00:42.220943 kernel: loop5: detected capacity change from 0 to 114432
Jan 17 00:00:42.236966 kernel: loop6: detected capacity change from 0 to 8
Jan 17 00:00:42.237971 kernel: loop7: detected capacity change from 0 to 114328
Jan 17 00:00:42.251552 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 17 00:00:42.252789 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 17 00:00:42.261150 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:00:42.261168 systemd[1]: Reloading...
Jan 17 00:00:42.398928 zram_generator::config[1217]: No configuration found.
Jan 17 00:00:42.443686 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:00:42.540430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:00:42.597058 systemd[1]: Reloading finished in 335 ms.
Jan 17 00:00:42.643612 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:00:42.647598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:00:42.653157 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:00:42.657177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:00:42.678087 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:00:42.678108 systemd[1]: Reloading...
Jan 17 00:00:42.710291 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:00:42.710633 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:00:42.711315 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:00:42.711899 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 17 00:00:42.714052 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 17 00:00:42.718288 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:00:42.718302 systemd-tmpfiles[1259]: Skipping /boot
Jan 17 00:00:42.733458 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:00:42.733472 systemd-tmpfiles[1259]: Skipping /boot
Jan 17 00:00:42.779971 zram_generator::config[1286]: No configuration found.
Jan 17 00:00:42.891942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:00:42.949176 systemd[1]: Reloading finished in 270 ms.
Jan 17 00:00:42.970518 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:00:42.977680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:00:42.993503 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:00:43.000254 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:00:43.006266 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:00:43.010178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:00:43.016296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:00:43.020370 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:00:43.025525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:00:43.032307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:00:43.038227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:00:43.043271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:00:43.044022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:00:43.046193 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:00:43.048727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:00:43.050305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:00:43.067197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:00:43.071294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:00:43.072014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:00:43.074982 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:00:43.080391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:00:43.083151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:00:43.084034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:00:43.089267 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:00:43.094050 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:00:43.100645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:00:43.101527 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:00:43.102846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:00:43.104643 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:00:43.104793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:00:43.109175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:00:43.109439 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:00:43.114981 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Jan 17 00:00:43.117410 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:00:43.120391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:00:43.120545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:00:43.122352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:00:43.129168 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:00:43.146419 augenrules[1362]: No rules
Jan 17 00:00:43.147251 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:00:43.156625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:00:43.167199 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:00:43.168326 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:00:43.169770 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:00:43.176914 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:00:43.178372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:00:43.273757 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:00:43.274643 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:00:43.277209 systemd-networkd[1372]: lo: Link UP
Jan 17 00:00:43.277218 systemd-networkd[1372]: lo: Gained carrier
Jan 17 00:00:43.277812 systemd-networkd[1372]: Enumeration completed
Jan 17 00:00:43.277939 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:00:43.280385 systemd-timesyncd[1352]: No network connectivity, watching for changes.
Jan 17 00:00:43.289196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:00:43.309102 systemd-resolved[1335]: Positive Trust Anchors:
Jan 17 00:00:43.309119 systemd-resolved[1335]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:00:43.309151 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:00:43.316499 systemd-resolved[1335]: Using system hostname 'ci-4081-3-6-n-5d990e87a1'.
Jan 17 00:00:43.320346 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:00:43.321265 systemd[1]: Reached target network.target - Network.
Jan 17 00:00:43.322253 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:00:43.326985 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 17 00:00:43.411085 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:43.411094 systemd-networkd[1372]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:00:43.411753 systemd-networkd[1372]: eth1: Link UP
Jan 17 00:00:43.411757 systemd-networkd[1372]: eth1: Gained carrier
Jan 17 00:00:43.411769 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:43.415012 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:00:43.417090 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:43.417592 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:00:43.418266 systemd-networkd[1372]: eth0: Link UP
Jan 17 00:00:43.419414 systemd-networkd[1372]: eth0: Gained carrier
Jan 17 00:00:43.419435 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:43.448954 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Jan 17 00:00:43.460309 systemd-networkd[1372]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 17 00:00:43.462473 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 17 00:00:43.470025 systemd-networkd[1372]: eth0: DHCPv4 address 188.245.80.168/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 17 00:00:43.470388 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 17 00:00:43.470548 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 17 00:00:43.474623 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 17 00:00:43.474772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:00:43.486141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:00:43.488569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:00:43.495162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:00:43.495848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:00:43.495890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:00:43.496267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:00:43.496464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:00:43.501474 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 17 00:00:43.501562 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 00:00:43.501579 kernel: [drm] features: -context_init
Jan 17 00:00:43.502924 kernel: [drm] number of scanouts: 1
Jan 17 00:00:43.502991 kernel: [drm] number of cap sets: 0
Jan 17 00:00:43.506933 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 17 00:00:43.517327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 00:00:43.520917 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 00:00:43.530674 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 00:00:43.532114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:00:43.532416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:00:43.533937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:00:43.534386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:00:43.543152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:00:43.544430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:00:43.544502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:00:43.573950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:00:43.595643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:43.652088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:43.731015 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:00:43.739303 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:00:43.751952 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:00:43.782986 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:00:43.784114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:00:43.784840 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:00:43.785985 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:00:43.786780 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:00:43.787893 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:00:43.788767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:00:43.789588 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:00:43.790379 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:00:43.790415 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:00:43.790909 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:00:43.792233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:00:43.794378 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:00:43.801396 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:00:43.803969 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:00:43.805494 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:00:43.806472 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:00:43.807209 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:00:43.807973 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:00:43.808004 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:00:43.811047 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:00:43.813974 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:00:43.816128 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:00:43.825105 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:00:43.830085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:00:43.834634 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:00:43.835517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:00:43.837123 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:00:43.841049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:00:43.845990 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 17 00:00:43.851140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:00:43.851798 jq[1446]: false
Jan 17 00:00:43.854085 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:00:43.861080 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:00:43.862726 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:00:43.865287 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:00:43.868399 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:00:43.872071 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:00:43.874986 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:00:43.881370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 00:00:43.881576 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:00:43.886920 jq[1458]: true
Jan 17 00:00:43.900234 coreos-metadata[1444]: Jan 17 00:00:43.899 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 17 00:00:43.900038 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:00:43.899783 dbus-daemon[1445]: [system] SELinux support is enabled
Jan 17 00:00:43.903922 coreos-metadata[1444]: Jan 17 00:00:43.901 INFO Fetch successful
Jan 17 00:00:43.903922 coreos-metadata[1444]: Jan 17 00:00:43.901 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 17 00:00:43.903922 coreos-metadata[1444]: Jan 17 00:00:43.901 INFO Fetch successful
Jan 17 00:00:43.903979 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:00:43.904007 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:00:43.905870 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:00:43.905899 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:00:43.909017 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:00:43.909182 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:00:43.910998 jq[1470]: true
Jan 17 00:00:43.952039 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:00:43.953974 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:00:43.967731 extend-filesystems[1447]: Found loop4
Jan 17 00:00:43.969010 extend-filesystems[1447]: Found loop5
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found loop6
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found loop7
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found sda
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found sda1
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found sda2
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found sda3
Jan 17 00:00:43.970015 extend-filesystems[1447]: Found usr
Jan 17 00:00:43.980807 extend-filesystems[1447]: Found sda4
Jan 17 00:00:43.980807 extend-filesystems[1447]: Found sda6
Jan 17 00:00:43.980807 extend-filesystems[1447]: Found sda7
Jan 17 00:00:43.980807 extend-filesystems[1447]: Found sda9
Jan 17 00:00:43.980807 extend-filesystems[1447]: Checking size of /dev/sda9
Jan 17 00:00:43.973485 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:00:43.985848 tar[1477]: linux-arm64/LICENSE
Jan 17 00:00:43.985848 tar[1477]: linux-arm64/helm
Jan 17 00:00:44.018253 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:00:44.024352 extend-filesystems[1447]: Resized partition /dev/sda9
Jan 17 00:00:44.020054 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:00:44.025810 update_engine[1457]: I20260117 00:00:44.025121 1457 main.cc:92] Flatcar Update Engine starting
Jan 17 00:00:44.039214 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 17 00:00:44.039301 update_engine[1457]: I20260117 00:00:44.027617 1457 update_check_scheduler.cc:74] Next update check in 11m29s
Jan 17 00:00:44.039393 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:00:44.027023 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:00:44.041301 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:00:44.101980 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1388)
Jan 17 00:00:44.108931 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:00:44.105963 systemd-logind[1456]: New seat seat0.
Jan 17 00:00:44.107473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:00:44.112602 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 17 00:00:44.115753 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 17 00:00:44.146124 systemd[1]: Starting sshkeys.service...
Jan 17 00:00:44.147363 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:00:44.185145 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 00:00:44.198940 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 17 00:00:44.224924 extend-filesystems[1511]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 17 00:00:44.224924 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 17 00:00:44.224924 extend-filesystems[1511]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 17 00:00:44.226999 extend-filesystems[1447]: Resized filesystem in /dev/sda9
Jan 17 00:00:44.226999 extend-filesystems[1447]: Found sr0
Jan 17 00:00:44.232510 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 00:00:44.233692 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:00:44.233896 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:00:44.270899 coreos-metadata[1523]: Jan 17 00:00:44.270 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 17 00:00:44.271292 coreos-metadata[1523]: Jan 17 00:00:44.271 INFO Fetch successful
Jan 17 00:00:44.276012 unknown[1523]: wrote ssh authorized keys file for user: core
Jan 17 00:00:44.311316 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:00:44.313955 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 00:00:44.319980 systemd[1]: Finished sshkeys.service.
Jan 17 00:00:44.323021 containerd[1486]: time="2026-01-17T00:00:44.321164680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:00:44.329840 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:00:44.385192 containerd[1486]: time="2026-01-17T00:00:44.384346800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395196 containerd[1486]: time="2026-01-17T00:00:44.395130480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395196 containerd[1486]: time="2026-01-17T00:00:44.395188320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:00:44.395329 containerd[1486]: time="2026-01-17T00:00:44.395210960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:00:44.395434 containerd[1486]: time="2026-01-17T00:00:44.395410800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:00:44.395489 containerd[1486]: time="2026-01-17T00:00:44.395437600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395522 containerd[1486]: time="2026-01-17T00:00:44.395503040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395545 containerd[1486]: time="2026-01-17T00:00:44.395520400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395717 containerd[1486]: time="2026-01-17T00:00:44.395694800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395740 containerd[1486]: time="2026-01-17T00:00:44.395715760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395740 containerd[1486]: time="2026-01-17T00:00:44.395730080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395779 containerd[1486]: time="2026-01-17T00:00:44.395739720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.395841 containerd[1486]: time="2026-01-17T00:00:44.395823520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.396068 containerd[1486]: time="2026-01-17T00:00:44.396047480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:00:44.396178 containerd[1486]: time="2026-01-17T00:00:44.396159000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:00:44.396204 containerd[1486]: time="2026-01-17T00:00:44.396177280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:00:44.396273 containerd[1486]: time="2026-01-17T00:00:44.396257440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:00:44.396336 containerd[1486]: time="2026-01-17T00:00:44.396314120Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.403670960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.403742560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.403759440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.403776560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.403791560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.404709160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405042800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405188600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405206560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405219560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405233800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405251160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405265320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.405908 containerd[1486]: time="2026-01-17T00:00:44.405284480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405338240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405355320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405369600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405381200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405406160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405419800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405432440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405453800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405465480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405479440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405491400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405504160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405517320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406179 containerd[1486]: time="2026-01-17T00:00:44.405536960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405548840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405562040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405573960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405590160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405610800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405622600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.406434 containerd[1486]: time="2026-01-17T00:00:44.405633360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:00:44.406573 containerd[1486]: time="2026-01-17T00:00:44.406542240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:00:44.406953 containerd[1486]: time="2026-01-17T00:00:44.406927360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:00:44.406980 containerd[1486]: time="2026-01-17T00:00:44.406954120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:00:44.407009 containerd[1486]: time="2026-01-17T00:00:44.406980040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:00:44.407009 containerd[1486]: time="2026-01-17T00:00:44.406991160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.407045 containerd[1486]: time="2026-01-17T00:00:44.407007680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:00:44.407045 containerd[1486]: time="2026-01-17T00:00:44.407018640Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:00:44.407045 containerd[1486]: time="2026-01-17T00:00:44.407029280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:00:44.407489 containerd[1486]: time="2026-01-17T00:00:44.407425480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 00:00:44.407616 containerd[1486]: time="2026-01-17T00:00:44.407496200Z" level=info msg="Connect containerd service"
Jan 17 00:00:44.407616 containerd[1486]: time="2026-01-17T00:00:44.407533440Z" level=info msg="using legacy CRI server"
Jan 17 00:00:44.407616 containerd[1486]: time="2026-01-17T00:00:44.407541000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 00:00:44.407672 containerd[1486]: time="2026-01-17T00:00:44.407639000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:00:44.409785 containerd[1486]: time="2026-01-17T00:00:44.409750800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:00:44.410616 containerd[1486]: time="2026-01-17T00:00:44.410573120Z" level=info msg="Start subscribing containerd event"
Jan 17 00:00:44.410663 containerd[1486]: time="2026-01-17T00:00:44.410635200Z" level=info msg="Start recovering state"
Jan 17 00:00:44.410729 containerd[1486]: time="2026-01-17T00:00:44.410712120Z" level=info msg="Start event monitor"
Jan 17 00:00:44.410753 containerd[1486]: time="2026-01-17T00:00:44.410729280Z" level=info msg="Start snapshots syncer"
Jan 17 00:00:44.410753 containerd[1486]: time="2026-01-17T00:00:44.410739520Z" level=info msg="Start cni network conf syncer for default"
Jan 17 00:00:44.410753 containerd[1486]: time="2026-01-17T00:00:44.410748640Z" level=info msg="Start streaming server"
Jan 17 00:00:44.412253 containerd[1486]: time="2026-01-17T00:00:44.412226680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 00:00:44.412295 containerd[1486]: time="2026-01-17T00:00:44.412281720Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 00:00:44.414354 containerd[1486]: time="2026-01-17T00:00:44.413244160Z" level=info msg="containerd successfully booted in 0.093361s"
Jan 17 00:00:44.413385 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 00:00:44.453072 systemd-networkd[1372]: eth1: Gained IPv6LL
Jan 17 00:00:44.453633 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 17 00:00:44.459562 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:00:44.460811 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:00:44.469119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:00:44.479624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:00:44.519642 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 00:00:44.929978 tar[1477]: linux-arm64/README.md
Jan 17 00:00:44.951968 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 00:00:45.286988 systemd-networkd[1372]: eth0: Gained IPv6LL
Jan 17 00:00:45.287665 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 17 00:00:45.375086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:00:45.386511 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:00:45.882217 kubelet[1559]: E0117 00:00:45.882089 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:00:45.885590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:00:45.886380 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:00:45.963737 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:00:45.988565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:00:46.001472 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:00:46.009946 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:00:46.010199 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:00:46.018724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:00:46.031031 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:00:46.039412 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:00:46.042559 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 17 00:00:46.044351 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:00:46.045162 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 00:00:46.045850 systemd[1]: Startup finished in 774ms (kernel) + 5.381s (initrd) + 5.003s (userspace) = 11.159s.
Jan 17 00:00:55.918620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:00:55.932306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:00:56.049145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:00:56.062561 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:00:56.111928 kubelet[1595]: E0117 00:00:56.111855 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:00:56.116628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:00:56.116973 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:06.168753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:01:06.180333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:06.343220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:06.343299 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:06.387535 kubelet[1611]: E0117 00:01:06.387446 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:06.390522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:06.390719 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:15.534936 systemd-timesyncd[1352]: Contacted time server 217.144.138.234:123 (2.flatcar.pool.ntp.org).
Jan 17 00:01:15.535055 systemd-timesyncd[1352]: Initial clock synchronization to Sat 2026-01-17 00:01:15.894283 UTC.
Jan 17 00:01:16.420934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 00:01:16.428249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:16.553303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:16.558007 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:16.599402 kubelet[1626]: E0117 00:01:16.599333 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:16.602644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:16.603016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:19.612146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 00:01:19.622767 systemd[1]: Started sshd@0-188.245.80.168:22-4.153.228.146:34696.service - OpenSSH per-connection server daemon (4.153.228.146:34696).
Jan 17 00:01:20.270408 sshd[1633]: Accepted publickey for core from 4.153.228.146 port 34696 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:20.273217 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:20.283460 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 00:01:20.289444 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 00:01:20.293327 systemd-logind[1456]: New session 1 of user core.
Jan 17 00:01:20.305682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 00:01:20.314402 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 00:01:20.318061 (systemd)[1637]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 00:01:20.421858 systemd[1637]: Queued start job for default target default.target.
Jan 17 00:01:20.432075 systemd[1637]: Created slice app.slice - User Application Slice.
Jan 17 00:01:20.432383 systemd[1637]: Reached target paths.target - Paths.
Jan 17 00:01:20.432420 systemd[1637]: Reached target timers.target - Timers.
Jan 17 00:01:20.435095 systemd[1637]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 00:01:20.450550 systemd[1637]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 00:01:20.450677 systemd[1637]: Reached target sockets.target - Sockets.
Jan 17 00:01:20.450691 systemd[1637]: Reached target basic.target - Basic System.
Jan 17 00:01:20.450737 systemd[1637]: Reached target default.target - Main User Target.
Jan 17 00:01:20.450765 systemd[1637]: Startup finished in 125ms.
Jan 17 00:01:20.450909 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 00:01:20.459559 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 00:01:20.933248 systemd[1]: Started sshd@1-188.245.80.168:22-4.153.228.146:34706.service - OpenSSH per-connection server daemon (4.153.228.146:34706).
Jan 17 00:01:21.551025 sshd[1648]: Accepted publickey for core from 4.153.228.146 port 34706 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:21.553451 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:21.558417 systemd-logind[1456]: New session 2 of user core.
Jan 17 00:01:21.566520 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 00:01:21.992632 sshd[1648]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:21.998838 systemd[1]: sshd@1-188.245.80.168:22-4.153.228.146:34706.service: Deactivated successfully.
Jan 17 00:01:22.001612 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 00:01:22.004299 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:01:22.005549 systemd-logind[1456]: Removed session 2.
Jan 17 00:01:22.107284 systemd[1]: Started sshd@2-188.245.80.168:22-4.153.228.146:34712.service - OpenSSH per-connection server daemon (4.153.228.146:34712).
Jan 17 00:01:22.737086 sshd[1655]: Accepted publickey for core from 4.153.228.146 port 34712 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:22.739120 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:22.744979 systemd-logind[1456]: New session 3 of user core.
Jan 17 00:01:22.753438 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 00:01:23.179723 sshd[1655]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:23.185028 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit.
Jan 17 00:01:23.185875 systemd[1]: sshd@2-188.245.80.168:22-4.153.228.146:34712.service: Deactivated successfully.
Jan 17 00:01:23.188616 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 00:01:23.190761 systemd-logind[1456]: Removed session 3.
Jan 17 00:01:23.291431 systemd[1]: Started sshd@3-188.245.80.168:22-4.153.228.146:34724.service - OpenSSH per-connection server daemon (4.153.228.146:34724).
Jan 17 00:01:23.903280 sshd[1662]: Accepted publickey for core from 4.153.228.146 port 34724 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:23.905770 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:23.911507 systemd-logind[1456]: New session 4 of user core.
Jan 17 00:01:23.919216 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 00:01:24.335826 sshd[1662]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:24.340389 systemd[1]: sshd@3-188.245.80.168:22-4.153.228.146:34724.service: Deactivated successfully.
Jan 17 00:01:24.343115 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 00:01:24.344844 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Jan 17 00:01:24.346157 systemd-logind[1456]: Removed session 4.
Jan 17 00:01:24.456355 systemd[1]: Started sshd@4-188.245.80.168:22-4.153.228.146:44052.service - OpenSSH per-connection server daemon (4.153.228.146:44052).
Jan 17 00:01:25.079866 sshd[1669]: Accepted publickey for core from 4.153.228.146 port 44052 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:25.082542 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:25.087423 systemd-logind[1456]: New session 5 of user core.
Jan 17 00:01:25.094302 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:01:25.430731 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:01:25.431074 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:25.450373 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:25.549526 sshd[1669]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:25.555103 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:01:25.555575 systemd[1]: sshd@4-188.245.80.168:22-4.153.228.146:44052.service: Deactivated successfully.
Jan 17 00:01:25.557873 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:01:25.560647 systemd-logind[1456]: Removed session 5.
Jan 17 00:01:25.657666 systemd[1]: Started sshd@5-188.245.80.168:22-4.153.228.146:44066.service - OpenSSH per-connection server daemon (4.153.228.146:44066).
Jan 17 00:01:26.278519 sshd[1677]: Accepted publickey for core from 4.153.228.146 port 44066 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:26.280688 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:26.285610 systemd-logind[1456]: New session 6 of user core.
Jan 17 00:01:26.294258 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:01:26.615021 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:01:26.615476 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:26.616777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 00:01:26.633255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:26.636132 sudo[1681]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:26.642271 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:01:26.642846 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:26.662442 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:26.667413 auditctl[1687]: No rules
Jan 17 00:01:26.668844 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:01:26.669107 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:26.683583 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:26.711035 augenrules[1705]: No rules
Jan 17 00:01:26.712103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:26.713364 sudo[1680]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:26.769567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:26.783480 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:26.811870 sshd[1677]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:26.817299 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:01:26.817975 systemd[1]: sshd@5-188.245.80.168:22-4.153.228.146:44066.service: Deactivated successfully.
Jan 17 00:01:26.821851 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:01:26.824983 systemd-logind[1456]: Removed session 6.
Jan 17 00:01:26.842939 kubelet[1715]: E0117 00:01:26.842127 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:26.845093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:26.845341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:26.943499 systemd[1]: Started sshd@6-188.245.80.168:22-4.153.228.146:44080.service - OpenSSH per-connection server daemon (4.153.228.146:44080).
Jan 17 00:01:27.595856 sshd[1725]: Accepted publickey for core from 4.153.228.146 port 44080 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:27.597872 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:27.602987 systemd-logind[1456]: New session 7 of user core.
Jan 17 00:01:27.611262 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:01:27.952790 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:01:27.953122 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:28.249316 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:01:28.249380 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:01:28.495169 dockerd[1743]: time="2026-01-17T00:01:28.494837104Z" level=info msg="Starting up"
Jan 17 00:01:28.591995 dockerd[1743]: time="2026-01-17T00:01:28.591869435Z" level=info msg="Loading containers: start."
Jan 17 00:01:28.701952 kernel: Initializing XFRM netlink socket
Jan 17 00:01:28.788207 systemd-networkd[1372]: docker0: Link UP
Jan 17 00:01:28.814678 dockerd[1743]: time="2026-01-17T00:01:28.814572371Z" level=info msg="Loading containers: done."
Jan 17 00:01:28.832597 dockerd[1743]: time="2026-01-17T00:01:28.832518955Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:01:28.832771 dockerd[1743]: time="2026-01-17T00:01:28.832655944Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:01:28.832848 dockerd[1743]: time="2026-01-17T00:01:28.832803347Z" level=info msg="Daemon has completed initialization"
Jan 17 00:01:28.876078 dockerd[1743]: time="2026-01-17T00:01:28.875266623Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:01:28.878029 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:01:28.887028 update_engine[1457]: I20260117 00:01:28.886953 1457 update_attempter.cc:509] Updating boot flags...
Jan 17 00:01:28.933976 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1883)
Jan 17 00:01:29.025347 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1884)
Jan 17 00:01:29.901461 containerd[1486]: time="2026-01-17T00:01:29.901128186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 17 00:01:30.572845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843115777.mount: Deactivated successfully.
Jan 17 00:01:31.766995 containerd[1486]: time="2026-01-17T00:01:31.765823387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:31.767522 containerd[1486]: time="2026-01-17T00:01:31.767491523Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080"
Jan 17 00:01:31.768257 containerd[1486]: time="2026-01-17T00:01:31.768228486Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:31.771195 containerd[1486]: time="2026-01-17T00:01:31.771166639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:31.772431 containerd[1486]: time="2026-01-17T00:01:31.772395125Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.87121628s"
Jan 17 00:01:31.772498 containerd[1486]: time="2026-01-17T00:01:31.772431143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 17 00:01:31.773215 containerd[1486]: time="2026-01-17T00:01:31.773104603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 17 00:01:33.103706 containerd[1486]: time="2026-01-17T00:01:33.103633967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:33.105416 containerd[1486]: time="2026-01-17T00:01:33.105370876Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106"
Jan 17 00:01:33.105913 containerd[1486]: time="2026-01-17T00:01:33.105863111Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:33.110855 containerd[1486]: time="2026-01-17T00:01:33.109580092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:33.110855 containerd[1486]: time="2026-01-17T00:01:33.110717308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.337579881s"
Jan 17 00:01:33.110855 containerd[1486]: time="2026-01-17T00:01:33.110752229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 17 00:01:33.111520 containerd[1486]: time="2026-01-17T00:01:33.111490482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 17 00:01:34.323986 containerd[1486]: time="2026-01-17T00:01:34.323924891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:34.325791 containerd[1486]: time="2026-01-17T00:01:34.325163664Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767"
Jan 17 00:01:34.326919 containerd[1486]: time="2026-01-17T00:01:34.326861495Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:34.330805 containerd[1486]: time="2026-01-17T00:01:34.330763244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:34.333031 containerd[1486]: time="2026-01-17T00:01:34.332994072Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.221459473s"
Jan 17 00:01:34.333173 containerd[1486]: time="2026-01-17T00:01:34.333155646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 17 00:01:34.333893 containerd[1486]: time="2026-01-17T00:01:34.333772422Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 17 00:01:35.318039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132098375.mount: Deactivated successfully.
Jan 17 00:01:35.624687 containerd[1486]: time="2026-01-17T00:01:35.624371127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:35.625536 containerd[1486]: time="2026-01-17T00:01:35.625490573Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750"
Jan 17 00:01:35.627506 containerd[1486]: time="2026-01-17T00:01:35.626678501Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:35.629180 containerd[1486]: time="2026-01-17T00:01:35.629144357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:35.629826 containerd[1486]: time="2026-01-17T00:01:35.629798274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.295766009s"
Jan 17 00:01:35.629936 containerd[1486]: time="2026-01-17T00:01:35.629919583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 17 00:01:35.630922 containerd[1486]: time="2026-01-17T00:01:35.630887252Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 17 00:01:36.271003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809810109.mount: Deactivated successfully.
Jan 17 00:01:36.918523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 17 00:01:36.930550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:37.070115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:37.074737 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:37.090538 containerd[1486]: time="2026-01-17T00:01:37.089139894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.093558 containerd[1486]: time="2026-01-17T00:01:37.093514650Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jan 17 00:01:37.095133 containerd[1486]: time="2026-01-17T00:01:37.095091571Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.109533 containerd[1486]: time="2026-01-17T00:01:37.109471969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.110871 containerd[1486]: time="2026-01-17T00:01:37.110822074Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.479888467s"
Jan 17 00:01:37.111169 containerd[1486]: time="2026-01-17T00:01:37.111054024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 17 00:01:37.111586 containerd[1486]: time="2026-01-17T00:01:37.111552056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:01:37.131354 kubelet[2030]: E0117 00:01:37.131280 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:37.134794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:37.134990 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:37.708376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202161856.mount: Deactivated successfully.
Jan 17 00:01:37.715985 containerd[1486]: time="2026-01-17T00:01:37.715151297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.716997 containerd[1486]: time="2026-01-17T00:01:37.716965141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 17 00:01:37.718280 containerd[1486]: time="2026-01-17T00:01:37.718249227Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.721549 containerd[1486]: time="2026-01-17T00:01:37.721513568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:37.722531 containerd[1486]: time="2026-01-17T00:01:37.722498723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 610.827383ms"
Jan 17 00:01:37.722999 containerd[1486]: time="2026-01-17T00:01:37.722979026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 17 00:01:37.723768 containerd[1486]: time="2026-01-17T00:01:37.723719557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 17 00:01:38.377022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179550402.mount: Deactivated successfully.
Jan 17 00:01:40.282161 containerd[1486]: time="2026-01-17T00:01:40.282089000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:40.284002 containerd[1486]: time="2026-01-17T00:01:40.283651885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Jan 17 00:01:40.285933 containerd[1486]: time="2026-01-17T00:01:40.285097236Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:40.290534 containerd[1486]: time="2026-01-17T00:01:40.290467210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:40.293649 containerd[1486]: time="2026-01-17T00:01:40.293588492Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.569510651s"
Jan 17 00:01:40.293798 containerd[1486]: time="2026-01-17T00:01:40.293780842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 17 00:01:45.283237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:45.301705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:45.333953 systemd[1]: Reloading requested from client PID 2122 ('systemctl') (unit session-7.scope)...
Jan 17 00:01:45.333972 systemd[1]: Reloading...
Jan 17 00:01:45.460936 zram_generator::config[2159]: No configuration found.
Jan 17 00:01:45.575560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:45.658716 systemd[1]: Reloading finished in 324 ms.
Jan 17 00:01:45.704378 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:01:45.704467 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:01:45.704758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:45.710397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:45.825777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:45.830747 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:01:45.872403 kubelet[2210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:45.873236 kubelet[2210]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:01:45.873290 kubelet[2210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:45.873449 kubelet[2210]: I0117 00:01:45.873410 2210 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:01:46.718786 kubelet[2210]: I0117 00:01:46.718729 2210 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:01:46.719034 kubelet[2210]: I0117 00:01:46.719014 2210 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:01:46.719717 kubelet[2210]: I0117 00:01:46.719685 2210 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:01:46.748829 kubelet[2210]: E0117 00:01:46.748776 2210 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.80.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:46.751899 kubelet[2210]: I0117 00:01:46.751799 2210 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:01:46.759822 kubelet[2210]: E0117 00:01:46.759780 2210 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:01:46.759822 kubelet[2210]: I0117 00:01:46.759812 2210 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:01:46.762063 kubelet[2210]: I0117 00:01:46.762027 2210 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:01:46.762982 kubelet[2210]: I0117 00:01:46.762897 2210 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:01:46.763191 kubelet[2210]: I0117 00:01:46.762974 2210 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-5d990e87a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:01:46.763288 kubelet[2210]: I0117 00:01:46.763251 2210 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:01:46.763288 kubelet[2210]: I0117 00:01:46.763260 2210 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:01:46.763490 kubelet[2210]: I0117 00:01:46.763458 2210 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:01:46.766920 kubelet[2210]: I0117 00:01:46.766886 2210 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:01:46.767016 kubelet[2210]: I0117 00:01:46.766965 2210 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:01:46.767016 kubelet[2210]: I0117 00:01:46.766985 2210 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:01:46.767016 kubelet[2210]: I0117 00:01:46.766996 2210 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:01:46.773246 kubelet[2210]: W0117 00:01:46.771867 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.80.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused
Jan 17 00:01:46.773246 kubelet[2210]: E0117 00:01:46.771961 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.80.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:46.773246 kubelet[2210]: W0117 00:01:46.772035 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.80.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-5d990e87a1&limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused
Jan 17 00:01:46.773246 kubelet[2210]: E0117 00:01:46.772063 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.80.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-5d990e87a1&limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:46.773443 kubelet[2210]: I0117 00:01:46.773393 2210 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:01:46.774084 kubelet[2210]: I0117 00:01:46.774058 2210 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:01:46.774194 kubelet[2210]: W0117 00:01:46.774180 2210 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:01:46.775062 kubelet[2210]: I0117 00:01:46.775030 2210 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:01:46.775062 kubelet[2210]: I0117 00:01:46.775067 2210 server.go:1287] "Started kubelet"
Jan 17 00:01:46.782661 kubelet[2210]: I0117 00:01:46.782631 2210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:01:46.784717 kubelet[2210]: E0117 00:01:46.783700 2210 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.80.168:6443/api/v1/namespaces/default/events\": dial tcp 188.245.80.168:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-5d990e87a1.188b5bb3df33f576 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-5d990e87a1,UID:ci-4081-3-6-n-5d990e87a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-5d990e87a1,},FirstTimestamp:2026-01-17 00:01:46.77504959 +0000 UTC m=+0.941029989,LastTimestamp:2026-01-17 00:01:46.77504959 +0000 UTC m=+0.941029989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-5d990e87a1,}"
Jan 17 00:01:46.786621 kubelet[2210]: E0117 00:01:46.786601 2210 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:01:46.788362 kubelet[2210]: I0117 00:01:46.788323 2210 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:01:46.789510 kubelet[2210]: I0117 00:01:46.789476 2210 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:01:46.789819 kubelet[2210]: E0117 00:01:46.789795 2210 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-5d990e87a1\" not found"
Jan 17 00:01:46.791935 kubelet[2210]: I0117 00:01:46.790333 2210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:01:46.791935 kubelet[2210]: I0117 00:01:46.790681 2210 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:01:46.791935 kubelet[2210]: I0117 00:01:46.790884 2210 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:01:46.791935 kubelet[2210]: E0117 00:01:46.791538 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.80.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5d990e87a1?timeout=10s\": dial tcp 188.245.80.168:6443: connect: connection refused" interval="200ms"
Jan 17 00:01:46.791935 kubelet[2210]: I0117 00:01:46.789486 2210 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:01:46.793489 kubelet[2210]: I0117 00:01:46.793466 2210 desired_state_of_world_populator.go:150] "Desired state populator
starts to run" Jan 17 00:01:46.793975 kubelet[2210]: I0117 00:01:46.793948 2210 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:01:46.794077 kubelet[2210]: W0117 00:01:46.793954 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.80.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:46.794245 kubelet[2210]: E0117 00:01:46.794217 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.80.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:46.794478 kubelet[2210]: I0117 00:01:46.794462 2210 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:01:46.794632 kubelet[2210]: I0117 00:01:46.794615 2210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:01:46.796664 kubelet[2210]: I0117 00:01:46.796637 2210 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:01:46.808873 kubelet[2210]: I0117 00:01:46.808811 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:01:46.810323 kubelet[2210]: I0117 00:01:46.810255 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:01:46.810323 kubelet[2210]: I0117 00:01:46.810285 2210 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:01:46.810323 kubelet[2210]: I0117 00:01:46.810307 2210 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:01:46.810323 kubelet[2210]: I0117 00:01:46.810313 2210 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:01:46.810709 kubelet[2210]: E0117 00:01:46.810362 2210 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:01:46.822151 kubelet[2210]: W0117 00:01:46.821803 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.80.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:46.822151 kubelet[2210]: E0117 00:01:46.821859 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.80.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:46.830702 kubelet[2210]: I0117 00:01:46.830636 2210 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:01:46.830702 kubelet[2210]: I0117 00:01:46.830655 2210 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:01:46.830702 kubelet[2210]: I0117 00:01:46.830697 2210 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:01:46.833211 kubelet[2210]: I0117 00:01:46.833174 2210 policy_none.go:49] "None policy: Start" Jan 17 00:01:46.833311 kubelet[2210]: I0117 00:01:46.833220 2210 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:01:46.833311 kubelet[2210]: I0117 00:01:46.833247 2210 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:01:46.840527 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 17 00:01:46.852229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:01:46.867993 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:01:46.870219 kubelet[2210]: I0117 00:01:46.869569 2210 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:01:46.870219 kubelet[2210]: I0117 00:01:46.869796 2210 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:01:46.870219 kubelet[2210]: I0117 00:01:46.869808 2210 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:01:46.870219 kubelet[2210]: I0117 00:01:46.870108 2210 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:01:46.872017 kubelet[2210]: E0117 00:01:46.871992 2210 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:01:46.872170 kubelet[2210]: E0117 00:01:46.872156 2210 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-5d990e87a1\" not found" Jan 17 00:01:46.926083 systemd[1]: Created slice kubepods-burstable-podce8a02dc48ac4e33a90a407e197d3171.slice - libcontainer container kubepods-burstable-podce8a02dc48ac4e33a90a407e197d3171.slice. Jan 17 00:01:46.939081 kubelet[2210]: E0117 00:01:46.938558 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.943108 systemd[1]: Created slice kubepods-burstable-pod9c20e0248ac1d391be5a2bd34f71a9f8.slice - libcontainer container kubepods-burstable-pod9c20e0248ac1d391be5a2bd34f71a9f8.slice. 
Jan 17 00:01:46.946529 kubelet[2210]: E0117 00:01:46.945946 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.949421 systemd[1]: Created slice kubepods-burstable-pod6100a1567b5f326b921db9605b4fe3e6.slice - libcontainer container kubepods-burstable-pod6100a1567b5f326b921db9605b4fe3e6.slice. Jan 17 00:01:46.952319 kubelet[2210]: E0117 00:01:46.952144 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.973034 kubelet[2210]: I0117 00:01:46.972669 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.976107 kubelet[2210]: E0117 00:01:46.976062 2210 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.80.168:6443/api/v1/nodes\": dial tcp 188.245.80.168:6443: connect: connection refused" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.992890 kubelet[2210]: E0117 00:01:46.992821 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.80.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5d990e87a1?timeout=10s\": dial tcp 188.245.80.168:6443: connect: connection refused" interval="400ms" Jan 17 00:01:46.996400 kubelet[2210]: I0117 00:01:46.996138 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.996400 kubelet[2210]: I0117 00:01:46.996265 2210 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.996400 kubelet[2210]: I0117 00:01:46.996333 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997085 kubelet[2210]: I0117 00:01:46.996369 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997085 kubelet[2210]: I0117 00:01:46.996781 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997085 kubelet[2210]: I0117 00:01:46.996822 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997085 kubelet[2210]: I0117 00:01:46.996857 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997085 kubelet[2210]: I0117 00:01:46.996934 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce8a02dc48ac4e33a90a407e197d3171-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-5d990e87a1\" (UID: \"ce8a02dc48ac4e33a90a407e197d3171\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:46.997374 kubelet[2210]: I0117 00:01:46.997005 2210 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:47.178189 kubelet[2210]: I0117 00:01:47.178121 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:47.178773 kubelet[2210]: E0117 00:01:47.178698 2210 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.80.168:6443/api/v1/nodes\": dial tcp 188.245.80.168:6443: connect: connection refused" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:47.240414 containerd[1486]: time="2026-01-17T00:01:47.240121623Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-5d990e87a1,Uid:ce8a02dc48ac4e33a90a407e197d3171,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:47.248215 containerd[1486]: time="2026-01-17T00:01:47.247824415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-5d990e87a1,Uid:9c20e0248ac1d391be5a2bd34f71a9f8,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:47.253677 containerd[1486]: time="2026-01-17T00:01:47.253604871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-5d990e87a1,Uid:6100a1567b5f326b921db9605b4fe3e6,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:47.393747 kubelet[2210]: E0117 00:01:47.393645 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.80.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5d990e87a1?timeout=10s\": dial tcp 188.245.80.168:6443: connect: connection refused" interval="800ms" Jan 17 00:01:47.581451 kubelet[2210]: I0117 00:01:47.581312 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:47.582042 kubelet[2210]: E0117 00:01:47.581959 2210 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.80.168:6443/api/v1/nodes\": dial tcp 188.245.80.168:6443: connect: connection refused" node="ci-4081-3-6-n-5d990e87a1" Jan 17 00:01:47.738984 kubelet[2210]: W0117 00:01:47.738778 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.80.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:47.738984 kubelet[2210]: E0117 00:01:47.738878 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://188.245.80.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:47.780596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530799035.mount: Deactivated successfully. Jan 17 00:01:47.788795 containerd[1486]: time="2026-01-17T00:01:47.787063986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:01:47.791426 containerd[1486]: time="2026-01-17T00:01:47.791383623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 17 00:01:47.792664 containerd[1486]: time="2026-01-17T00:01:47.792628031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:01:47.794113 containerd[1486]: time="2026-01-17T00:01:47.794068524Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:01:47.795671 containerd[1486]: time="2026-01-17T00:01:47.795642169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:01:47.796647 containerd[1486]: time="2026-01-17T00:01:47.796623196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:01:47.798022 containerd[1486]: time="2026-01-17T00:01:47.797995351Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:01:47.801249 containerd[1486]: 
time="2026-01-17T00:01:47.801197648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:01:47.802081 containerd[1486]: time="2026-01-17T00:01:47.802057372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.091655ms" Jan 17 00:01:47.805870 containerd[1486]: time="2026-01-17T00:01:47.805825865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.612736ms" Jan 17 00:01:47.809543 containerd[1486]: time="2026-01-17T00:01:47.809171082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.449044ms" Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.938128954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.938178235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.938193608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.937368834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.937989236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.938003528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.938324 containerd[1486]: time="2026-01-17T00:01:47.938107256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.939926 containerd[1486]: time="2026-01-17T00:01:47.938278480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.944694 containerd[1486]: time="2026-01-17T00:01:47.944293985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:47.944694 containerd[1486]: time="2026-01-17T00:01:47.944360481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:47.944694 containerd[1486]: time="2026-01-17T00:01:47.944466290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.944694 containerd[1486]: time="2026-01-17T00:01:47.944590755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:47.968114 systemd[1]: Started cri-containerd-a5e3415116fc2be66aa3a19c7e3c0f8df25322b94cf2681b18484f3847b57ef6.scope - libcontainer container a5e3415116fc2be66aa3a19c7e3c0f8df25322b94cf2681b18484f3847b57ef6. Jan 17 00:01:47.973124 systemd[1]: Started cri-containerd-b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4.scope - libcontainer container b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4. Jan 17 00:01:47.979066 systemd[1]: Started cri-containerd-5f16efe4cbfeac850e505fad604b6a0c57c8c9690119f87a4c5b9d462475b1d0.scope - libcontainer container 5f16efe4cbfeac850e505fad604b6a0c57c8c9690119f87a4c5b9d462475b1d0. Jan 17 00:01:48.026937 containerd[1486]: time="2026-01-17T00:01:48.026074679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-5d990e87a1,Uid:9c20e0248ac1d391be5a2bd34f71a9f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5e3415116fc2be66aa3a19c7e3c0f8df25322b94cf2681b18484f3847b57ef6\"" Jan 17 00:01:48.043768 containerd[1486]: time="2026-01-17T00:01:48.043015292Z" level=info msg="CreateContainer within sandbox \"a5e3415116fc2be66aa3a19c7e3c0f8df25322b94cf2681b18484f3847b57ef6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:01:48.052283 containerd[1486]: time="2026-01-17T00:01:48.052226262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-5d990e87a1,Uid:6100a1567b5f326b921db9605b4fe3e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4\"" Jan 17 00:01:48.056863 containerd[1486]: time="2026-01-17T00:01:48.056818692Z" level=info msg="CreateContainer within sandbox 
\"b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:01:48.063047 containerd[1486]: time="2026-01-17T00:01:48.062939663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-5d990e87a1,Uid:ce8a02dc48ac4e33a90a407e197d3171,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f16efe4cbfeac850e505fad604b6a0c57c8c9690119f87a4c5b9d462475b1d0\"" Jan 17 00:01:48.066515 containerd[1486]: time="2026-01-17T00:01:48.066400284Z" level=info msg="CreateContainer within sandbox \"5f16efe4cbfeac850e505fad604b6a0c57c8c9690119f87a4c5b9d462475b1d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:01:48.082857 containerd[1486]: time="2026-01-17T00:01:48.082680104Z" level=info msg="CreateContainer within sandbox \"b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e\"" Jan 17 00:01:48.083403 containerd[1486]: time="2026-01-17T00:01:48.083303013Z" level=info msg="CreateContainer within sandbox \"a5e3415116fc2be66aa3a19c7e3c0f8df25322b94cf2681b18484f3847b57ef6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f569e4dc7aab682c594183e76cf245691bc49e3ebac6d89d8668f47bb7638314\"" Jan 17 00:01:48.084098 containerd[1486]: time="2026-01-17T00:01:48.084068806Z" level=info msg="StartContainer for \"9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e\"" Jan 17 00:01:48.084417 containerd[1486]: time="2026-01-17T00:01:48.084145533Z" level=info msg="StartContainer for \"f569e4dc7aab682c594183e76cf245691bc49e3ebac6d89d8668f47bb7638314\"" Jan 17 00:01:48.090182 containerd[1486]: time="2026-01-17T00:01:48.090047054Z" level=info msg="CreateContainer within sandbox \"5f16efe4cbfeac850e505fad604b6a0c57c8c9690119f87a4c5b9d462475b1d0\" 
for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ce772592d2982a711368af4a7dfc30cf3372b11ac3c7b294364fa51da54fe12\"" Jan 17 00:01:48.092396 containerd[1486]: time="2026-01-17T00:01:48.091261677Z" level=info msg="StartContainer for \"5ce772592d2982a711368af4a7dfc30cf3372b11ac3c7b294364fa51da54fe12\"" Jan 17 00:01:48.119578 systemd[1]: Started cri-containerd-9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e.scope - libcontainer container 9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e. Jan 17 00:01:48.133031 systemd[1]: Started cri-containerd-f569e4dc7aab682c594183e76cf245691bc49e3ebac6d89d8668f47bb7638314.scope - libcontainer container f569e4dc7aab682c594183e76cf245691bc49e3ebac6d89d8668f47bb7638314. Jan 17 00:01:48.155568 systemd[1]: Started cri-containerd-5ce772592d2982a711368af4a7dfc30cf3372b11ac3c7b294364fa51da54fe12.scope - libcontainer container 5ce772592d2982a711368af4a7dfc30cf3372b11ac3c7b294364fa51da54fe12. Jan 17 00:01:48.176347 kubelet[2210]: W0117 00:01:48.176285 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.80.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:48.176347 kubelet[2210]: E0117 00:01:48.176349 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.80.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:48.196011 kubelet[2210]: E0117 00:01:48.194952 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.80.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-5d990e87a1?timeout=10s\": 
dial tcp 188.245.80.168:6443: connect: connection refused" interval="1.6s" Jan 17 00:01:48.200222 containerd[1486]: time="2026-01-17T00:01:48.200103951Z" level=info msg="StartContainer for \"f569e4dc7aab682c594183e76cf245691bc49e3ebac6d89d8668f47bb7638314\" returns successfully" Jan 17 00:01:48.205242 kubelet[2210]: W0117 00:01:48.205164 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.80.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-5d990e87a1&limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:48.206155 kubelet[2210]: E0117 00:01:48.205248 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.80.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-5d990e87a1&limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:48.206232 containerd[1486]: time="2026-01-17T00:01:48.205561607Z" level=info msg="StartContainer for \"9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e\" returns successfully" Jan 17 00:01:48.213882 kubelet[2210]: W0117 00:01:48.213682 2210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.80.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.80.168:6443: connect: connection refused Jan 17 00:01:48.213882 kubelet[2210]: E0117 00:01:48.213759 2210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.80.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.80.168:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:48.260313 containerd[1486]: 
time="2026-01-17T00:01:48.260254213Z" level=info msg="StartContainer for \"5ce772592d2982a711368af4a7dfc30cf3372b11ac3c7b294364fa51da54fe12\" returns successfully"
Jan 17 00:01:48.385634 kubelet[2210]: I0117 00:01:48.385597 2210 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:48.858232 kubelet[2210]: E0117 00:01:48.858190 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:48.862426 kubelet[2210]: E0117 00:01:48.862395 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:48.863807 kubelet[2210]: E0117 00:01:48.863781 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:49.864291 kubelet[2210]: E0117 00:01:49.864254 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:49.865111 kubelet[2210]: E0117 00:01:49.865092 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.013199 kubelet[2210]: E0117 00:01:51.013163 2210 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.285392 kubelet[2210]: E0117 00:01:51.285135 2210 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-5d990e87a1\" not found" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.359216 kubelet[2210]: I0117 00:01:51.359173 2210 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.359216 kubelet[2210]: E0117 00:01:51.359215 2210 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-5d990e87a1\": node \"ci-4081-3-6-n-5d990e87a1\" not found"
Jan 17 00:01:51.391502 kubelet[2210]: I0117 00:01:51.391463 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.418053 kubelet[2210]: E0117 00:01:51.417937 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-n-5d990e87a1.188b5bb3df33f576 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-5d990e87a1,UID:ci-4081-3-6-n-5d990e87a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-5d990e87a1,},FirstTimestamp:2026-01-17 00:01:46.77504959 +0000 UTC m=+0.941029989,LastTimestamp:2026-01-17 00:01:46.77504959 +0000 UTC m=+0.941029989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-5d990e87a1,}"
Jan 17 00:01:51.418502 kubelet[2210]: E0117 00:01:51.418459 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-5d990e87a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.418502 kubelet[2210]: I0117 00:01:51.418488 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.425968 kubelet[2210]: E0117 00:01:51.425928 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.425968 kubelet[2210]: I0117 00:01:51.425962 2210 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.431365 kubelet[2210]: E0117 00:01:51.431324 2210 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:51.773630 kubelet[2210]: I0117 00:01:51.773327 2210 apiserver.go:52] "Watching apiserver"
Jan 17 00:01:51.794219 kubelet[2210]: I0117 00:01:51.794167 2210 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:01:53.303961 systemd[1]: Reloading requested from client PID 2482 ('systemctl') (unit session-7.scope)...
Jan 17 00:01:53.303980 systemd[1]: Reloading...
Jan 17 00:01:53.409942 zram_generator::config[2525]: No configuration found.
Jan 17 00:01:53.521535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:53.626806 systemd[1]: Reloading finished in 322 ms.
Jan 17 00:01:53.672449 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:53.689865 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:01:53.691033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:53.691132 systemd[1]: kubelet.service: Consumed 1.369s CPU time, 128.7M memory peak, 0B memory swap peak.
Jan 17 00:01:53.701160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:53.839358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:53.844671 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:01:53.901041 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:53.903149 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:01:53.903921 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:53.903921 kubelet[2566]: I0117 00:01:53.903565 2566 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:01:53.915614 kubelet[2566]: I0117 00:01:53.915582 2566 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:01:53.915769 kubelet[2566]: I0117 00:01:53.915758 2566 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:01:53.916285 kubelet[2566]: I0117 00:01:53.916262 2566 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:01:53.917878 kubelet[2566]: I0117 00:01:53.917851 2566 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 00:01:53.920458 kubelet[2566]: I0117 00:01:53.920433 2566 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:01:53.926827 kubelet[2566]: E0117 00:01:53.926785 2566 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:01:53.927947 kubelet[2566]: I0117 00:01:53.927044 2566 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:01:53.930104 kubelet[2566]: I0117 00:01:53.930082 2566 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:01:53.930552 kubelet[2566]: I0117 00:01:53.930514 2566 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:01:53.930806 kubelet[2566]: I0117 00:01:53.930628 2566 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-5d990e87a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:01:53.930948 kubelet[2566]: I0117 00:01:53.930933 2566 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:01:53.931022 kubelet[2566]: I0117 00:01:53.931014 2566 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:01:53.931114 kubelet[2566]: I0117 00:01:53.931105 2566 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:01:53.931336 kubelet[2566]: I0117 00:01:53.931323 2566 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:01:53.932091 kubelet[2566]: I0117 00:01:53.932043 2566 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:01:53.932190 kubelet[2566]: I0117 00:01:53.932114 2566 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:01:53.932190 kubelet[2566]: I0117 00:01:53.932135 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:01:53.933731 kubelet[2566]: I0117 00:01:53.933704 2566 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:01:53.934523 kubelet[2566]: I0117 00:01:53.934497 2566 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:01:53.935685 kubelet[2566]: I0117 00:01:53.935665 2566 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:01:53.935981 kubelet[2566]: I0117 00:01:53.935791 2566 server.go:1287] "Started kubelet"
Jan 17 00:01:53.939954 kubelet[2566]: I0117 00:01:53.939780 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:01:53.947251 kubelet[2566]: I0117 00:01:53.947218 2566 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:01:53.948413 kubelet[2566]: I0117 00:01:53.948394 2566 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:01:53.948737 kubelet[2566]: E0117 00:01:53.948712 2566 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-5d990e87a1\" not found"
Jan 17 00:01:53.950990 kubelet[2566]: I0117 00:01:53.950966 2566 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:01:53.951802 kubelet[2566]: I0117 00:01:53.951205 2566 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:01:53.954943 kubelet[2566]: I0117 00:01:53.953164 2566 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:01:53.954943 kubelet[2566]: I0117 00:01:53.954154 2566 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:01:53.955286 kubelet[2566]: I0117 00:01:53.955240 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:01:53.955542 kubelet[2566]: I0117 00:01:53.955527 2566 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:01:53.958428 kubelet[2566]: I0117 00:01:53.958390 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:01:53.959534 kubelet[2566]: I0117 00:01:53.959513 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:01:53.959636 kubelet[2566]: I0117 00:01:53.959620 2566 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:01:53.959700 kubelet[2566]: I0117 00:01:53.959692 2566 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:01:53.959746 kubelet[2566]: I0117 00:01:53.959739 2566 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:01:53.959835 kubelet[2566]: E0117 00:01:53.959818 2566 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:01:53.970031 kubelet[2566]: I0117 00:01:53.970008 2566 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:01:53.970271 kubelet[2566]: I0117 00:01:53.970251 2566 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:01:53.972360 kubelet[2566]: I0117 00:01:53.972325 2566 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:01:53.999355 kubelet[2566]: E0117 00:01:53.999310 2566 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:01:54.050541 kubelet[2566]: I0117 00:01:54.050508 2566 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:01:54.050804 kubelet[2566]: I0117 00:01:54.050788 2566 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:01:54.050920 kubelet[2566]: I0117 00:01:54.050898 2566 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:01:54.051223 kubelet[2566]: I0117 00:01:54.051203 2566 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:01:54.051325 kubelet[2566]: I0117 00:01:54.051298 2566 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:01:54.051391 kubelet[2566]: I0117 00:01:54.051382 2566 policy_none.go:49] "None policy: Start"
Jan 17 00:01:54.051491 kubelet[2566]: I0117 00:01:54.051477 2566 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:01:54.051569 kubelet[2566]: I0117 00:01:54.051558 2566 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:01:54.051805 kubelet[2566]: I0117 00:01:54.051787 2566 state_mem.go:75] "Updated machine memory state"
Jan 17 00:01:54.056819 kubelet[2566]: I0117 00:01:54.056795 2566 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:01:54.057404 kubelet[2566]: I0117 00:01:54.057389 2566 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:01:54.057557 kubelet[2566]: I0117 00:01:54.057521 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:01:54.057822 kubelet[2566]: I0117 00:01:54.057808 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:01:54.060786 kubelet[2566]: I0117 00:01:54.060750 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.062086 kubelet[2566]: I0117 00:01:54.062068 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.064464 kubelet[2566]: I0117 00:01:54.064432 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.069933 kubelet[2566]: E0117 00:01:54.067251 2566 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:01:54.153062 kubelet[2566]: I0117 00:01:54.152606 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155096 kubelet[2566]: I0117 00:01:54.155015 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155374 kubelet[2566]: I0117 00:01:54.155097 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155374 kubelet[2566]: I0117 00:01:54.155139 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155374 kubelet[2566]: I0117 00:01:54.155176 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155374 kubelet[2566]: I0117 00:01:54.155213 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c20e0248ac1d391be5a2bd34f71a9f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" (UID: \"9c20e0248ac1d391be5a2bd34f71a9f8\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155374 kubelet[2566]: I0117 00:01:54.155250 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155602 kubelet[2566]: I0117 00:01:54.155291 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6100a1567b5f326b921db9605b4fe3e6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-5d990e87a1\" (UID: \"6100a1567b5f326b921db9605b4fe3e6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.155602 kubelet[2566]: I0117 00:01:54.155328 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce8a02dc48ac4e33a90a407e197d3171-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-5d990e87a1\" (UID: \"ce8a02dc48ac4e33a90a407e197d3171\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.168395 kubelet[2566]: I0117 00:01:54.168339 2566 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.184602 kubelet[2566]: I0117 00:01:54.184557 2566 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.185488 kubelet[2566]: I0117 00:01:54.185267 2566 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:54.285350 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 00:01:54.285703 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 00:01:54.732830 sudo[2597]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:54.947052 kubelet[2566]: I0117 00:01:54.946848 2566 apiserver.go:52] "Watching apiserver"
Jan 17 00:01:55.026029 kubelet[2566]: I0117 00:01:55.023056 2566 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:55.034150 kubelet[2566]: E0117 00:01:55.034100 2566 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-5d990e87a1\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1"
Jan 17 00:01:55.051614 kubelet[2566]: I0117 00:01:55.051578 2566 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:01:55.056578 kubelet[2566]: I0117 00:01:55.056319 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-5d990e87a1" podStartSLOduration=1.056285241 podStartE2EDuration="1.056285241s" podCreationTimestamp="2026-01-17 00:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:55.055875925 +0000 UTC m=+1.207154342" watchObservedRunningTime="2026-01-17 00:01:55.056285241 +0000 UTC m=+1.207563698"
Jan 17 00:01:55.068885 kubelet[2566]: I0117 00:01:55.068815 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-5d990e87a1" podStartSLOduration=1.068796569 podStartE2EDuration="1.068796569s" podCreationTimestamp="2026-01-17 00:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:55.068449341 +0000 UTC m=+1.219727758" watchObservedRunningTime="2026-01-17 00:01:55.068796569 +0000 UTC m=+1.220074986"
Jan 17 00:01:55.102993 kubelet[2566]: I0117 00:01:55.102833 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-5d990e87a1" podStartSLOduration=1.102814082 podStartE2EDuration="1.102814082s" podCreationTimestamp="2026-01-17 00:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:55.084797468 +0000 UTC m=+1.236075925" watchObservedRunningTime="2026-01-17 00:01:55.102814082 +0000 UTC m=+1.254092499"
Jan 17 00:01:56.903413 sudo[1728]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:57.007548 sshd[1725]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:57.012186 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:01:57.012770 systemd[1]: sshd@6-188.245.80.168:22-4.153.228.146:44080.service: Deactivated successfully.
Jan 17 00:01:57.014701 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:01:57.015075 systemd[1]: session-7.scope: Consumed 7.306s CPU time, 149.3M memory peak, 0B memory swap peak.
Jan 17 00:01:57.016325 systemd-logind[1456]: Removed session 7.
Jan 17 00:01:59.729090 kubelet[2566]: I0117 00:01:59.729010 2566 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:01:59.730113 containerd[1486]: time="2026-01-17T00:01:59.730038850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:01:59.733395 kubelet[2566]: I0117 00:01:59.730272 2566 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:01:59.895932 systemd[1]: Created slice kubepods-besteffort-pod3ba0b65c_8c7c_4dd0_b095_6a3002317f7f.slice - libcontainer container kubepods-besteffort-pod3ba0b65c_8c7c_4dd0_b095_6a3002317f7f.slice.
Jan 17 00:01:59.917736 systemd[1]: Created slice kubepods-burstable-pod8d2c870c_48d3_4bc7_9d48_5071db9f73bc.slice - libcontainer container kubepods-burstable-pod8d2c870c_48d3_4bc7_9d48_5071db9f73bc.slice.
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991742 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-run\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991789 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-kube-proxy\") pod \"kube-proxy-jpjfx\" (UID: \"3ba0b65c-8c7c-4dd0-b095-6a3002317f7f\") " pod="kube-system/kube-proxy-jpjfx"
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991811 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drrlw\" (UniqueName: \"kubernetes.io/projected/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-kube-api-access-drrlw\") pod \"kube-proxy-jpjfx\" (UID: \"3ba0b65c-8c7c-4dd0-b095-6a3002317f7f\") " pod="kube-system/kube-proxy-jpjfx"
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991830 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-bpf-maps\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991845 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-kernel\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.991867 kubelet[2566]: I0117 00:01:59.991861 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-cgroup\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.991876 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-lib-modules\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.991895 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-config-path\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.991942 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cni-path\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.991958 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-xtables-lock\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.991994 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hostproc\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992112 kubelet[2566]: I0117 00:01:59.992014 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-lib-modules\") pod \"kube-proxy-jpjfx\" (UID: \"3ba0b65c-8c7c-4dd0-b095-6a3002317f7f\") " pod="kube-system/kube-proxy-jpjfx"
Jan 17 00:01:59.992239 kubelet[2566]: I0117 00:01:59.992034 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-net\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992239 kubelet[2566]: I0117 00:01:59.992049 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-xtables-lock\") pod \"kube-proxy-jpjfx\" (UID: \"3ba0b65c-8c7c-4dd0-b095-6a3002317f7f\") " pod="kube-system/kube-proxy-jpjfx"
Jan 17 00:01:59.992239 kubelet[2566]: I0117 00:01:59.992068 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-clustermesh-secrets\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992239 kubelet[2566]: I0117 00:01:59.992098 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-etc-cni-netd\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992239 kubelet[2566]: I0117 00:01:59.992113 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sr9\" (UniqueName: \"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:01:59.992338 kubelet[2566]: I0117 00:01:59.992133 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hubble-tls\") pod \"cilium-nkkxv\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " pod="kube-system/cilium-nkkxv"
Jan 17 00:02:00.110474 kubelet[2566]: E0117 00:02:00.110367 2566 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.110474 kubelet[2566]: E0117 00:02:00.110411 2566 projected.go:194] Error preparing data for projected volume kube-api-access-85sr9 for pod kube-system/cilium-nkkxv: configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.110738 kubelet[2566]: E0117 00:02:00.110495 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9 podName:8d2c870c-48d3-4bc7-9d48-5071db9f73bc nodeName:}" failed. No retries permitted until 2026-01-17 00:02:00.610455447 +0000 UTC m=+6.761733864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-85sr9" (UniqueName: "kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9") pod "cilium-nkkxv" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc") : configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.122240 kubelet[2566]: E0117 00:02:00.122181 2566 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.122240 kubelet[2566]: E0117 00:02:00.122218 2566 projected.go:194] Error preparing data for projected volume kube-api-access-drrlw for pod kube-system/kube-proxy-jpjfx: configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.122415 kubelet[2566]: E0117 00:02:00.122274 2566 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-kube-api-access-drrlw podName:3ba0b65c-8c7c-4dd0-b095-6a3002317f7f nodeName:}" failed. No retries permitted until 2026-01-17 00:02:00.62225489 +0000 UTC m=+6.773533307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-drrlw" (UniqueName: "kubernetes.io/projected/3ba0b65c-8c7c-4dd0-b095-6a3002317f7f-kube-api-access-drrlw") pod "kube-proxy-jpjfx" (UID: "3ba0b65c-8c7c-4dd0-b095-6a3002317f7f") : configmap "kube-root-ca.crt" not found
Jan 17 00:02:00.812806 containerd[1486]: time="2026-01-17T00:02:00.812577666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpjfx,Uid:3ba0b65c-8c7c-4dd0-b095-6a3002317f7f,Namespace:kube-system,Attempt:0,}"
Jan 17 00:02:00.822829 containerd[1486]: time="2026-01-17T00:02:00.822338939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkkxv,Uid:8d2c870c-48d3-4bc7-9d48-5071db9f73bc,Namespace:kube-system,Attempt:0,}"
Jan 17 00:02:00.860791 containerd[1486]: time="2026-01-17T00:02:00.860566426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:02:00.860791 containerd[1486]: time="2026-01-17T00:02:00.860630905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:02:00.860791 containerd[1486]: time="2026-01-17T00:02:00.860645993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:00.860791 containerd[1486]: time="2026-01-17T00:02:00.860741690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:00.906763 systemd[1]: Started cri-containerd-a14910117844a3d6f6bcf755e8fb624e71cd7c5a4a742bbcdaeb9bc459946940.scope - libcontainer container a14910117844a3d6f6bcf755e8fb624e71cd7c5a4a742bbcdaeb9bc459946940.
Jan 17 00:02:00.915979 containerd[1486]: time="2026-01-17T00:02:00.915669809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:02:00.915979 containerd[1486]: time="2026-01-17T00:02:00.915742612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:02:00.915979 containerd[1486]: time="2026-01-17T00:02:00.915758542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:00.924638 systemd[1]: Created slice kubepods-besteffort-pode0fc02e7_6d94_4715_b1f0_228803910e8d.slice - libcontainer container kubepods-besteffort-pode0fc02e7_6d94_4715_b1f0_228803910e8d.slice.
Jan 17 00:02:00.927737 containerd[1486]: time="2026-01-17T00:02:00.920535777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:00.954525 systemd[1]: Started cri-containerd-f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73.scope - libcontainer container f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73.
Jan 17 00:02:00.970352 containerd[1486]: time="2026-01-17T00:02:00.970212620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpjfx,Uid:3ba0b65c-8c7c-4dd0-b095-6a3002317f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a14910117844a3d6f6bcf755e8fb624e71cd7c5a4a742bbcdaeb9bc459946940\""
Jan 17 00:02:00.977564 containerd[1486]: time="2026-01-17T00:02:00.977239430Z" level=info msg="CreateContainer within sandbox \"a14910117844a3d6f6bcf755e8fb624e71cd7c5a4a742bbcdaeb9bc459946940\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:02:00.992880 containerd[1486]: time="2026-01-17T00:02:00.992836406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkkxv,Uid:8d2c870c-48d3-4bc7-9d48-5071db9f73bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\""
Jan 17 00:02:00.995612 containerd[1486]: time="2026-01-17T00:02:00.995153502Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 17 00:02:00.999577 kubelet[2566]: I0117 00:02:00.999514 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0fc02e7-6d94-4715-b1f0-228803910e8d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5z55b\" (UID: \"e0fc02e7-6d94-4715-b1f0-228803910e8d\") " pod="kube-system/cilium-operator-6c4d7847fc-5z55b"
Jan 17 00:02:00.999577 kubelet[2566]: I0117 00:02:00.999562 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jlh\" (UniqueName: \"kubernetes.io/projected/e0fc02e7-6d94-4715-b1f0-228803910e8d-kube-api-access-s8jlh\") pod \"cilium-operator-6c4d7847fc-5z55b\" (UID: \"e0fc02e7-6d94-4715-b1f0-228803910e8d\") " pod="kube-system/cilium-operator-6c4d7847fc-5z55b"
Jan 17 00:02:01.003658 containerd[1486]: time="2026-01-17T00:02:01.003583307Z" level=info msg="CreateContainer within sandbox \"a14910117844a3d6f6bcf755e8fb624e71cd7c5a4a742bbcdaeb9bc459946940\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"755d220213f6a388972aee72d92457994a090e818526f2e5f30d92f3fe33bc84\""
Jan 17 00:02:01.004678 containerd[1486]: time="2026-01-17T00:02:01.004640383Z" level=info msg="StartContainer for \"755d220213f6a388972aee72d92457994a090e818526f2e5f30d92f3fe33bc84\""
Jan 17 00:02:01.039184 systemd[1]: Started cri-containerd-755d220213f6a388972aee72d92457994a090e818526f2e5f30d92f3fe33bc84.scope - libcontainer container 755d220213f6a388972aee72d92457994a090e818526f2e5f30d92f3fe33bc84.
Jan 17 00:02:01.075467 containerd[1486]: time="2026-01-17T00:02:01.075202002Z" level=info msg="StartContainer for \"755d220213f6a388972aee72d92457994a090e818526f2e5f30d92f3fe33bc84\" returns successfully"
Jan 17 00:02:01.240023 containerd[1486]: time="2026-01-17T00:02:01.238397817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5z55b,Uid:e0fc02e7-6d94-4715-b1f0-228803910e8d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:02:01.288070 containerd[1486]: time="2026-01-17T00:02:01.287130358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:02:01.288070 containerd[1486]: time="2026-01-17T00:02:01.287195595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:01.288070 containerd[1486]: time="2026-01-17T00:02:01.287214526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:01.288070 containerd[1486]: time="2026-01-17T00:02:01.287300134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:01.321191 systemd[1]: Started cri-containerd-e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa.scope - libcontainer container e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa. Jan 17 00:02:01.357679 containerd[1486]: time="2026-01-17T00:02:01.357178568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5z55b,Uid:e0fc02e7-6d94-4715-b1f0-228803910e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\"" Jan 17 00:02:05.648680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809046012.mount: Deactivated successfully. 
Jan 17 00:02:07.042232 containerd[1486]: time="2026-01-17T00:02:07.042105369Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:07.044072 containerd[1486]: time="2026-01-17T00:02:07.044034467Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 17 00:02:07.045051 containerd[1486]: time="2026-01-17T00:02:07.045003918Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:07.047286 containerd[1486]: time="2026-01-17T00:02:07.046826170Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.051613115s" Jan 17 00:02:07.047286 containerd[1486]: time="2026-01-17T00:02:07.046866427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 00:02:07.049697 containerd[1486]: time="2026-01-17T00:02:07.049644485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:02:07.050830 containerd[1486]: time="2026-01-17T00:02:07.050795412Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:02:07.065654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155765669.mount: Deactivated successfully. Jan 17 00:02:07.070964 containerd[1486]: time="2026-01-17T00:02:07.070926145Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\"" Jan 17 00:02:07.071741 containerd[1486]: time="2026-01-17T00:02:07.071693190Z" level=info msg="StartContainer for \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\"" Jan 17 00:02:07.112252 systemd[1]: Started cri-containerd-ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe.scope - libcontainer container ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe. Jan 17 00:02:07.143942 containerd[1486]: time="2026-01-17T00:02:07.143407786Z" level=info msg="StartContainer for \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\" returns successfully" Jan 17 00:02:07.162848 systemd[1]: cri-containerd-ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe.scope: Deactivated successfully. 
Jan 17 00:02:07.360783 containerd[1486]: time="2026-01-17T00:02:07.360230886Z" level=info msg="shim disconnected" id=ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe namespace=k8s.io Jan 17 00:02:07.360783 containerd[1486]: time="2026-01-17T00:02:07.360360981Z" level=warning msg="cleaning up after shim disconnected" id=ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe namespace=k8s.io Jan 17 00:02:07.360783 containerd[1486]: time="2026-01-17T00:02:07.360371745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:08.061705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe-rootfs.mount: Deactivated successfully. Jan 17 00:02:08.077405 containerd[1486]: time="2026-01-17T00:02:08.077358460Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:02:08.096865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460443323.mount: Deactivated successfully. 
Jan 17 00:02:08.102437 kubelet[2566]: I0117 00:02:08.102041 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpjfx" podStartSLOduration=9.102018214 podStartE2EDuration="9.102018214s" podCreationTimestamp="2026-01-17 00:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:02.065048323 +0000 UTC m=+8.216326780" watchObservedRunningTime="2026-01-17 00:02:08.102018214 +0000 UTC m=+14.253296631" Jan 17 00:02:08.108917 containerd[1486]: time="2026-01-17T00:02:08.108858066Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\"" Jan 17 00:02:08.111136 containerd[1486]: time="2026-01-17T00:02:08.110120178Z" level=info msg="StartContainer for \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\"" Jan 17 00:02:08.141143 systemd[1]: Started cri-containerd-bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165.scope - libcontainer container bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165. Jan 17 00:02:08.174988 containerd[1486]: time="2026-01-17T00:02:08.174940888Z" level=info msg="StartContainer for \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\" returns successfully" Jan 17 00:02:08.186365 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:02:08.187305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:08.187383 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:02:08.197514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 00:02:08.197765 systemd[1]: cri-containerd-bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165.scope: Deactivated successfully. Jan 17 00:02:08.222680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:08.223331 containerd[1486]: time="2026-01-17T00:02:08.222949865Z" level=info msg="shim disconnected" id=bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165 namespace=k8s.io Jan 17 00:02:08.223331 containerd[1486]: time="2026-01-17T00:02:08.223053707Z" level=warning msg="cleaning up after shim disconnected" id=bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165 namespace=k8s.io Jan 17 00:02:08.223331 containerd[1486]: time="2026-01-17T00:02:08.223065272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:09.063998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165-rootfs.mount: Deactivated successfully. Jan 17 00:02:09.080881 containerd[1486]: time="2026-01-17T00:02:09.080802100Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:02:09.102117 containerd[1486]: time="2026-01-17T00:02:09.100164009Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\"" Jan 17 00:02:09.102117 containerd[1486]: time="2026-01-17T00:02:09.100994652Z" level=info msg="StartContainer for \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\"" Jan 17 00:02:09.142179 systemd[1]: Started cri-containerd-6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1.scope - libcontainer container 6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1. 
Jan 17 00:02:09.178306 containerd[1486]: time="2026-01-17T00:02:09.178246895Z" level=info msg="StartContainer for \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\" returns successfully" Jan 17 00:02:09.186103 systemd[1]: cri-containerd-6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1.scope: Deactivated successfully. Jan 17 00:02:09.208525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1-rootfs.mount: Deactivated successfully. Jan 17 00:02:09.215202 containerd[1486]: time="2026-01-17T00:02:09.215130882Z" level=info msg="shim disconnected" id=6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1 namespace=k8s.io Jan 17 00:02:09.215202 containerd[1486]: time="2026-01-17T00:02:09.215196387Z" level=warning msg="cleaning up after shim disconnected" id=6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1 namespace=k8s.io Jan 17 00:02:09.215667 containerd[1486]: time="2026-01-17T00:02:09.215207791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:10.091687 containerd[1486]: time="2026-01-17T00:02:10.091151207Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:02:10.110528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561581420.mount: Deactivated successfully. 
Jan 17 00:02:10.116840 containerd[1486]: time="2026-01-17T00:02:10.116424677Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\"" Jan 17 00:02:10.117945 containerd[1486]: time="2026-01-17T00:02:10.117832600Z" level=info msg="StartContainer for \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\"" Jan 17 00:02:10.169333 systemd[1]: Started cri-containerd-163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8.scope - libcontainer container 163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8. Jan 17 00:02:10.205821 systemd[1]: cri-containerd-163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8.scope: Deactivated successfully. Jan 17 00:02:10.207972 containerd[1486]: time="2026-01-17T00:02:10.207248863Z" level=info msg="StartContainer for \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\" returns successfully" Jan 17 00:02:10.265601 containerd[1486]: time="2026-01-17T00:02:10.265230245Z" level=info msg="shim disconnected" id=163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8 namespace=k8s.io Jan 17 00:02:10.265843 containerd[1486]: time="2026-01-17T00:02:10.265665167Z" level=warning msg="cleaning up after shim disconnected" id=163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8 namespace=k8s.io Jan 17 00:02:10.265843 containerd[1486]: time="2026-01-17T00:02:10.265680733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:10.369946 containerd[1486]: time="2026-01-17T00:02:10.369089754Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:10.370968 containerd[1486]: 
time="2026-01-17T00:02:10.370931679Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 17 00:02:10.372280 containerd[1486]: time="2026-01-17T00:02:10.372223359Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:10.374970 containerd[1486]: time="2026-01-17T00:02:10.374915799Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.325194124s" Jan 17 00:02:10.375077 containerd[1486]: time="2026-01-17T00:02:10.374971460Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 00:02:10.378626 containerd[1486]: time="2026-01-17T00:02:10.378577800Z" level=info msg="CreateContainer within sandbox \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:02:10.401419 containerd[1486]: time="2026-01-17T00:02:10.401375870Z" level=info msg="CreateContainer within sandbox \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\"" Jan 17 00:02:10.403030 containerd[1486]: time="2026-01-17T00:02:10.402068327Z" level=info msg="StartContainer for 
\"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\"" Jan 17 00:02:10.428132 systemd[1]: Started cri-containerd-796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a.scope - libcontainer container 796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a. Jan 17 00:02:10.457508 containerd[1486]: time="2026-01-17T00:02:10.457458067Z" level=info msg="StartContainer for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" returns successfully" Jan 17 00:02:11.065510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8-rootfs.mount: Deactivated successfully. Jan 17 00:02:11.097067 containerd[1486]: time="2026-01-17T00:02:11.096934995Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:02:11.113643 containerd[1486]: time="2026-01-17T00:02:11.113513981Z" level=info msg="CreateContainer within sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\"" Jan 17 00:02:11.115926 containerd[1486]: time="2026-01-17T00:02:11.114469721Z" level=info msg="StartContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\"" Jan 17 00:02:11.161308 systemd[1]: Started cri-containerd-47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4.scope - libcontainer container 47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4. 
Jan 17 00:02:11.219173 containerd[1486]: time="2026-01-17T00:02:11.219042735Z" level=info msg="StartContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" returns successfully" Jan 17 00:02:11.443800 kubelet[2566]: I0117 00:02:11.443769 2566 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:02:11.478790 kubelet[2566]: I0117 00:02:11.477010 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5z55b" podStartSLOduration=2.460703257 podStartE2EDuration="11.476982585s" podCreationTimestamp="2026-01-17 00:02:00 +0000 UTC" firstStartedPulling="2026-01-17 00:02:01.359470501 +0000 UTC m=+7.510748918" lastFinishedPulling="2026-01-17 00:02:10.375749829 +0000 UTC m=+16.527028246" observedRunningTime="2026-01-17 00:02:11.298811633 +0000 UTC m=+17.450090050" watchObservedRunningTime="2026-01-17 00:02:11.476982585 +0000 UTC m=+17.628261042" Jan 17 00:02:11.487028 systemd[1]: Created slice kubepods-burstable-pod4b664cb4_7f33_441a_9441_c3c6c39da172.slice - libcontainer container kubepods-burstable-pod4b664cb4_7f33_441a_9441_c3c6c39da172.slice. Jan 17 00:02:11.497628 systemd[1]: Created slice kubepods-burstable-pod62492353_eeb3_4c66_a0d5_7e4dff81ed2e.slice - libcontainer container kubepods-burstable-pod62492353_eeb3_4c66_a0d5_7e4dff81ed2e.slice. 
Jan 17 00:02:11.579068 kubelet[2566]: I0117 00:02:11.578981 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b664cb4-7f33-441a-9441-c3c6c39da172-config-volume\") pod \"coredns-668d6bf9bc-wvc2p\" (UID: \"4b664cb4-7f33-441a-9441-c3c6c39da172\") " pod="kube-system/coredns-668d6bf9bc-wvc2p" Jan 17 00:02:11.579353 kubelet[2566]: I0117 00:02:11.579297 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnvs\" (UniqueName: \"kubernetes.io/projected/4b664cb4-7f33-441a-9441-c3c6c39da172-kube-api-access-wxnvs\") pod \"coredns-668d6bf9bc-wvc2p\" (UID: \"4b664cb4-7f33-441a-9441-c3c6c39da172\") " pod="kube-system/coredns-668d6bf9bc-wvc2p" Jan 17 00:02:11.579552 kubelet[2566]: I0117 00:02:11.579431 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62492353-eeb3-4c66-a0d5-7e4dff81ed2e-config-volume\") pod \"coredns-668d6bf9bc-scpc2\" (UID: \"62492353-eeb3-4c66-a0d5-7e4dff81ed2e\") " pod="kube-system/coredns-668d6bf9bc-scpc2" Jan 17 00:02:11.579552 kubelet[2566]: I0117 00:02:11.579516 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbrj\" (UniqueName: \"kubernetes.io/projected/62492353-eeb3-4c66-a0d5-7e4dff81ed2e-kube-api-access-fgbrj\") pod \"coredns-668d6bf9bc-scpc2\" (UID: \"62492353-eeb3-4c66-a0d5-7e4dff81ed2e\") " pod="kube-system/coredns-668d6bf9bc-scpc2" Jan 17 00:02:11.793563 containerd[1486]: time="2026-01-17T00:02:11.792115731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wvc2p,Uid:4b664cb4-7f33-441a-9441-c3c6c39da172,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:11.803609 containerd[1486]: time="2026-01-17T00:02:11.803364018Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-scpc2,Uid:62492353-eeb3-4c66-a0d5-7e4dff81ed2e,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:14.299239 systemd-networkd[1372]: cilium_host: Link UP Jan 17 00:02:14.299369 systemd-networkd[1372]: cilium_net: Link UP Jan 17 00:02:14.299372 systemd-networkd[1372]: cilium_net: Gained carrier Jan 17 00:02:14.299503 systemd-networkd[1372]: cilium_host: Gained carrier Jan 17 00:02:14.302700 systemd-networkd[1372]: cilium_host: Gained IPv6LL Jan 17 00:02:14.428132 systemd-networkd[1372]: cilium_vxlan: Link UP Jan 17 00:02:14.428139 systemd-networkd[1372]: cilium_vxlan: Gained carrier Jan 17 00:02:14.734343 kernel: NET: Registered PF_ALG protocol family Jan 17 00:02:15.141123 systemd-networkd[1372]: cilium_net: Gained IPv6LL Jan 17 00:02:15.456484 systemd-networkd[1372]: lxc_health: Link UP Jan 17 00:02:15.467243 systemd-networkd[1372]: lxc_health: Gained carrier Jan 17 00:02:15.852985 systemd-networkd[1372]: lxc56e7f8e2afe6: Link UP Jan 17 00:02:15.866916 kernel: eth0: renamed from tmp69033 Jan 17 00:02:15.873369 systemd-networkd[1372]: lxc56e7f8e2afe6: Gained carrier Jan 17 00:02:15.880698 systemd-networkd[1372]: lxcd08393e7515e: Link UP Jan 17 00:02:15.888091 kernel: eth0: renamed from tmpd8d64 Jan 17 00:02:15.894768 systemd-networkd[1372]: lxcd08393e7515e: Gained carrier Jan 17 00:02:16.165595 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Jan 17 00:02:16.805507 systemd-networkd[1372]: lxc_health: Gained IPv6LL Jan 17 00:02:16.848957 kubelet[2566]: I0117 00:02:16.847472 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nkkxv" podStartSLOduration=11.794064064 podStartE2EDuration="17.847449043s" podCreationTimestamp="2026-01-17 00:01:59 +0000 UTC" firstStartedPulling="2026-01-17 00:02:00.994531733 +0000 UTC m=+7.145810150" lastFinishedPulling="2026-01-17 00:02:07.047916712 +0000 UTC m=+13.199195129" observedRunningTime="2026-01-17 00:02:12.123529915 +0000 UTC m=+18.274808372" 
watchObservedRunningTime="2026-01-17 00:02:16.847449043 +0000 UTC m=+22.998727460" Jan 17 00:02:17.189308 systemd-networkd[1372]: lxcd08393e7515e: Gained IPv6LL Jan 17 00:02:17.317400 systemd-networkd[1372]: lxc56e7f8e2afe6: Gained IPv6LL Jan 17 00:02:19.775324 containerd[1486]: time="2026-01-17T00:02:19.774121508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:19.775324 containerd[1486]: time="2026-01-17T00:02:19.774176883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:19.775324 containerd[1486]: time="2026-01-17T00:02:19.774187245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:19.775324 containerd[1486]: time="2026-01-17T00:02:19.774299235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:19.809510 systemd[1]: Started cri-containerd-d8d645ae6a7485c3a2599c11cf60ec65678e541eea63315bfdf0009af54cef22.scope - libcontainer container d8d645ae6a7485c3a2599c11cf60ec65678e541eea63315bfdf0009af54cef22. Jan 17 00:02:19.849064 containerd[1486]: time="2026-01-17T00:02:19.848470072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:19.849064 containerd[1486]: time="2026-01-17T00:02:19.849033741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:19.849565 containerd[1486]: time="2026-01-17T00:02:19.849249757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:19.849565 containerd[1486]: time="2026-01-17T00:02:19.849434526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:19.871472 containerd[1486]: time="2026-01-17T00:02:19.871353026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-scpc2,Uid:62492353-eeb3-4c66-a0d5-7e4dff81ed2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8d645ae6a7485c3a2599c11cf60ec65678e541eea63315bfdf0009af54cef22\"" Jan 17 00:02:19.882122 systemd[1]: Started cri-containerd-69033d7211f478c9ed9ebf688f9547e601c8baed970c0d4ff500aaa57ebebee7.scope - libcontainer container 69033d7211f478c9ed9ebf688f9547e601c8baed970c0d4ff500aaa57ebebee7. Jan 17 00:02:19.884275 containerd[1486]: time="2026-01-17T00:02:19.884238943Z" level=info msg="CreateContainer within sandbox \"d8d645ae6a7485c3a2599c11cf60ec65678e541eea63315bfdf0009af54cef22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:19.908182 containerd[1486]: time="2026-01-17T00:02:19.908127802Z" level=info msg="CreateContainer within sandbox \"d8d645ae6a7485c3a2599c11cf60ec65678e541eea63315bfdf0009af54cef22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"458805aa03a70cff11f61930197c50134e0ac21786fc41b1d842958e5015cad2\"" Jan 17 00:02:19.910002 containerd[1486]: time="2026-01-17T00:02:19.908691351Z" level=info msg="StartContainer for \"458805aa03a70cff11f61930197c50134e0ac21786fc41b1d842958e5015cad2\"" Jan 17 00:02:19.936173 systemd[1]: Started cri-containerd-458805aa03a70cff11f61930197c50134e0ac21786fc41b1d842958e5015cad2.scope - libcontainer container 458805aa03a70cff11f61930197c50134e0ac21786fc41b1d842958e5015cad2. 
Jan 17 00:02:19.962024 containerd[1486]: time="2026-01-17T00:02:19.961139260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wvc2p,Uid:4b664cb4-7f33-441a-9441-c3c6c39da172,Namespace:kube-system,Attempt:0,} returns sandbox id \"69033d7211f478c9ed9ebf688f9547e601c8baed970c0d4ff500aaa57ebebee7\"" Jan 17 00:02:19.981545 containerd[1486]: time="2026-01-17T00:02:19.981366833Z" level=info msg="CreateContainer within sandbox \"69033d7211f478c9ed9ebf688f9547e601c8baed970c0d4ff500aaa57ebebee7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:20.008141 containerd[1486]: time="2026-01-17T00:02:20.008096820Z" level=info msg="StartContainer for \"458805aa03a70cff11f61930197c50134e0ac21786fc41b1d842958e5015cad2\" returns successfully" Jan 17 00:02:20.009472 containerd[1486]: time="2026-01-17T00:02:20.009432120Z" level=info msg="CreateContainer within sandbox \"69033d7211f478c9ed9ebf688f9547e601c8baed970c0d4ff500aaa57ebebee7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd9edb83abb8462f3d216f8a5954aa79eb771096b383e9b33cbceaa6dad593a3\"" Jan 17 00:02:20.011022 containerd[1486]: time="2026-01-17T00:02:20.010974634Z" level=info msg="StartContainer for \"dd9edb83abb8462f3d216f8a5954aa79eb771096b383e9b33cbceaa6dad593a3\"" Jan 17 00:02:20.047394 systemd[1]: Started cri-containerd-dd9edb83abb8462f3d216f8a5954aa79eb771096b383e9b33cbceaa6dad593a3.scope - libcontainer container dd9edb83abb8462f3d216f8a5954aa79eb771096b383e9b33cbceaa6dad593a3. 
Jan 17 00:02:20.081944 containerd[1486]: time="2026-01-17T00:02:20.081877322Z" level=info msg="StartContainer for \"dd9edb83abb8462f3d216f8a5954aa79eb771096b383e9b33cbceaa6dad593a3\" returns successfully" Jan 17 00:02:20.154170 kubelet[2566]: I0117 00:02:20.153475 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wvc2p" podStartSLOduration=20.153454022 podStartE2EDuration="20.153454022s" podCreationTimestamp="2026-01-17 00:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:20.153182353 +0000 UTC m=+26.304460770" watchObservedRunningTime="2026-01-17 00:02:20.153454022 +0000 UTC m=+26.304732439" Jan 17 00:02:20.172611 kubelet[2566]: I0117 00:02:20.170927 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-scpc2" podStartSLOduration=20.17089023 podStartE2EDuration="20.17089023s" podCreationTimestamp="2026-01-17 00:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:20.170779642 +0000 UTC m=+26.322058139" watchObservedRunningTime="2026-01-17 00:02:20.17089023 +0000 UTC m=+26.322168647" Jan 17 00:04:13.027100 systemd[1]: Started sshd@7-188.245.80.168:22-4.153.228.146:39740.service - OpenSSH per-connection server daemon (4.153.228.146:39740). Jan 17 00:04:13.622600 sshd[3967]: Accepted publickey for core from 4.153.228.146 port 39740 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:13.625941 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:13.632179 systemd-logind[1456]: New session 8 of user core. Jan 17 00:04:13.646243 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:04:14.139220 sshd[3967]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:14.144512 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:04:14.144636 systemd[1]: sshd@7-188.245.80.168:22-4.153.228.146:39740.service: Deactivated successfully. Jan 17 00:04:14.147761 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:04:14.151046 systemd-logind[1456]: Removed session 8. Jan 17 00:04:19.254260 systemd[1]: Started sshd@8-188.245.80.168:22-4.153.228.146:38732.service - OpenSSH per-connection server daemon (4.153.228.146:38732). Jan 17 00:04:19.862110 sshd[3981]: Accepted publickey for core from 4.153.228.146 port 38732 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:19.866028 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:19.871967 systemd-logind[1456]: New session 9 of user core. Jan 17 00:04:19.883244 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:04:20.366960 sshd[3981]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:20.370777 systemd[1]: sshd@8-188.245.80.168:22-4.153.228.146:38732.service: Deactivated successfully. Jan 17 00:04:20.373033 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:04:20.375187 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:04:20.376664 systemd-logind[1456]: Removed session 9. Jan 17 00:04:25.477445 systemd[1]: Started sshd@9-188.245.80.168:22-4.153.228.146:43326.service - OpenSSH per-connection server daemon (4.153.228.146:43326). Jan 17 00:04:26.071845 sshd[3995]: Accepted publickey for core from 4.153.228.146 port 43326 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:26.074588 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:26.080519 systemd-logind[1456]: New session 10 of user core. 
Jan 17 00:04:26.087229 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:04:26.568041 sshd[3995]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:26.574999 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:04:26.575388 systemd[1]: sshd@9-188.245.80.168:22-4.153.228.146:43326.service: Deactivated successfully. Jan 17 00:04:26.578366 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:04:26.580655 systemd-logind[1456]: Removed session 10. Jan 17 00:04:26.682577 systemd[1]: Started sshd@10-188.245.80.168:22-4.153.228.146:43330.service - OpenSSH per-connection server daemon (4.153.228.146:43330). Jan 17 00:04:27.280473 sshd[4009]: Accepted publickey for core from 4.153.228.146 port 43330 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:27.283230 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:27.289551 systemd-logind[1456]: New session 11 of user core. Jan 17 00:04:27.295129 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:04:27.817279 sshd[4009]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:27.822891 systemd[1]: sshd@10-188.245.80.168:22-4.153.228.146:43330.service: Deactivated successfully. Jan 17 00:04:27.826648 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:04:27.828632 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:04:27.829794 systemd-logind[1456]: Removed session 11. Jan 17 00:04:27.932925 systemd[1]: Started sshd@11-188.245.80.168:22-4.153.228.146:43344.service - OpenSSH per-connection server daemon (4.153.228.146:43344). 
Jan 17 00:04:28.537792 sshd[4019]: Accepted publickey for core from 4.153.228.146 port 43344 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:28.541583 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:28.549040 systemd-logind[1456]: New session 12 of user core. Jan 17 00:04:28.553111 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:04:29.039000 sshd[4019]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:29.050103 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:04:29.050971 systemd[1]: sshd@11-188.245.80.168:22-4.153.228.146:43344.service: Deactivated successfully. Jan 17 00:04:29.056291 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:04:29.057853 systemd-logind[1456]: Removed session 12. Jan 17 00:04:34.158599 systemd[1]: Started sshd@12-188.245.80.168:22-4.153.228.146:43354.service - OpenSSH per-connection server daemon (4.153.228.146:43354). Jan 17 00:04:34.773275 sshd[4034]: Accepted publickey for core from 4.153.228.146 port 43354 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:34.776415 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:34.783116 systemd-logind[1456]: New session 13 of user core. Jan 17 00:04:34.791257 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:04:35.277544 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:35.281781 systemd[1]: sshd@12-188.245.80.168:22-4.153.228.146:43354.service: Deactivated successfully. Jan 17 00:04:35.284161 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:04:35.286423 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:04:35.287513 systemd-logind[1456]: Removed session 13. 
Jan 17 00:04:40.393259 systemd[1]: Started sshd@13-188.245.80.168:22-4.153.228.146:50032.service - OpenSSH per-connection server daemon (4.153.228.146:50032). Jan 17 00:04:40.999237 sshd[4047]: Accepted publickey for core from 4.153.228.146 port 50032 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:41.001153 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:41.008342 systemd-logind[1456]: New session 14 of user core. Jan 17 00:04:41.018275 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:04:41.502269 sshd[4047]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:41.507075 systemd[1]: sshd@13-188.245.80.168:22-4.153.228.146:50032.service: Deactivated successfully. Jan 17 00:04:41.510485 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:04:41.512562 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:04:41.514089 systemd-logind[1456]: Removed session 14. Jan 17 00:04:41.622130 systemd[1]: Started sshd@14-188.245.80.168:22-4.153.228.146:50048.service - OpenSSH per-connection server daemon (4.153.228.146:50048). Jan 17 00:04:42.238569 sshd[4060]: Accepted publickey for core from 4.153.228.146 port 50048 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:42.240726 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:42.247175 systemd-logind[1456]: New session 15 of user core. Jan 17 00:04:42.254225 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:04:42.819333 sshd[4060]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:42.825149 systemd[1]: sshd@14-188.245.80.168:22-4.153.228.146:50048.service: Deactivated successfully. Jan 17 00:04:42.828383 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:04:42.829585 systemd-logind[1456]: Session 15 logged out. 
Waiting for processes to exit. Jan 17 00:04:42.830816 systemd-logind[1456]: Removed session 15. Jan 17 00:04:42.934441 systemd[1]: Started sshd@15-188.245.80.168:22-4.153.228.146:50062.service - OpenSSH per-connection server daemon (4.153.228.146:50062). Jan 17 00:04:43.549680 sshd[4070]: Accepted publickey for core from 4.153.228.146 port 50062 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:43.551430 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:43.558021 systemd-logind[1456]: New session 16 of user core. Jan 17 00:04:43.565242 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:04:44.669410 sshd[4070]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:44.675470 systemd[1]: sshd@15-188.245.80.168:22-4.153.228.146:50062.service: Deactivated successfully. Jan 17 00:04:44.678226 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:04:44.680836 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:04:44.682051 systemd-logind[1456]: Removed session 16. Jan 17 00:04:44.794433 systemd[1]: Started sshd@16-188.245.80.168:22-4.153.228.146:38034.service - OpenSSH per-connection server daemon (4.153.228.146:38034). Jan 17 00:04:45.443531 sshd[4088]: Accepted publickey for core from 4.153.228.146 port 38034 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:45.445929 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:45.450988 systemd-logind[1456]: New session 17 of user core. Jan 17 00:04:45.462653 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:04:46.083803 sshd[4088]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:46.088270 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. 
Jan 17 00:04:46.089461 systemd[1]: sshd@16-188.245.80.168:22-4.153.228.146:38034.service: Deactivated successfully. Jan 17 00:04:46.092867 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:04:46.095180 systemd-logind[1456]: Removed session 17. Jan 17 00:04:46.195376 systemd[1]: Started sshd@17-188.245.80.168:22-4.153.228.146:38046.service - OpenSSH per-connection server daemon (4.153.228.146:38046). Jan 17 00:04:46.791639 sshd[4099]: Accepted publickey for core from 4.153.228.146 port 38046 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:46.793992 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:46.799658 systemd-logind[1456]: New session 18 of user core. Jan 17 00:04:46.804143 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:04:47.285577 sshd[4099]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:47.290058 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:04:47.290430 systemd[1]: sshd@17-188.245.80.168:22-4.153.228.146:38046.service: Deactivated successfully. Jan 17 00:04:47.294519 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:04:47.298882 systemd-logind[1456]: Removed session 18. Jan 17 00:04:52.401454 systemd[1]: Started sshd@18-188.245.80.168:22-4.153.228.146:38060.service - OpenSSH per-connection server daemon (4.153.228.146:38060). Jan 17 00:04:53.008372 sshd[4114]: Accepted publickey for core from 4.153.228.146 port 38060 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:53.010647 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:53.017461 systemd-logind[1456]: New session 19 of user core. Jan 17 00:04:53.024260 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 17 00:04:53.511823 sshd[4114]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:53.517509 systemd[1]: sshd@18-188.245.80.168:22-4.153.228.146:38060.service: Deactivated successfully. Jan 17 00:04:53.519864 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:04:53.521171 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:04:53.522353 systemd-logind[1456]: Removed session 19. Jan 17 00:04:58.622195 systemd[1]: Started sshd@19-188.245.80.168:22-4.153.228.146:59632.service - OpenSSH per-connection server daemon (4.153.228.146:59632). Jan 17 00:04:59.218747 sshd[4129]: Accepted publickey for core from 4.153.228.146 port 59632 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:59.221165 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:59.226681 systemd-logind[1456]: New session 20 of user core. Jan 17 00:04:59.231225 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:04:59.719665 sshd[4129]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:59.729431 systemd[1]: sshd@19-188.245.80.168:22-4.153.228.146:59632.service: Deactivated successfully. Jan 17 00:04:59.733576 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:04:59.734841 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:04:59.736493 systemd-logind[1456]: Removed session 20. Jan 17 00:05:04.838072 systemd[1]: Started sshd@20-188.245.80.168:22-4.153.228.146:43942.service - OpenSSH per-connection server daemon (4.153.228.146:43942). Jan 17 00:05:05.438718 sshd[4144]: Accepted publickey for core from 4.153.228.146 port 43942 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:05.441168 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:05.446694 systemd-logind[1456]: New session 21 of user core. 
Jan 17 00:05:05.456199 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:05:05.934590 sshd[4144]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:05.940446 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:05:05.941080 systemd[1]: sshd@20-188.245.80.168:22-4.153.228.146:43942.service: Deactivated successfully. Jan 17 00:05:05.944374 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:05:05.947237 systemd-logind[1456]: Removed session 21. Jan 17 00:05:06.048312 systemd[1]: Started sshd@21-188.245.80.168:22-4.153.228.146:43944.service - OpenSSH per-connection server daemon (4.153.228.146:43944). Jan 17 00:05:06.663718 sshd[4157]: Accepted publickey for core from 4.153.228.146 port 43944 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:06.665777 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:06.671792 systemd-logind[1456]: New session 22 of user core. Jan 17 00:05:06.681327 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 00:05:08.958317 containerd[1486]: time="2026-01-17T00:05:08.958219820Z" level=info msg="StopContainer for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" with timeout 30 (s)" Jan 17 00:05:08.960552 containerd[1486]: time="2026-01-17T00:05:08.960486599Z" level=info msg="Stop container \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" with signal terminated" Jan 17 00:05:08.973205 containerd[1486]: time="2026-01-17T00:05:08.973153172Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:05:08.976139 systemd[1]: cri-containerd-796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a.scope: Deactivated successfully. Jan 17 00:05:08.985729 containerd[1486]: time="2026-01-17T00:05:08.985688353Z" level=info msg="StopContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" with timeout 2 (s)" Jan 17 00:05:08.986163 containerd[1486]: time="2026-01-17T00:05:08.986131646Z" level=info msg="Stop container \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" with signal terminated" Jan 17 00:05:08.996059 systemd-networkd[1372]: lxc_health: Link DOWN Jan 17 00:05:08.996068 systemd-networkd[1372]: lxc_health: Lost carrier Jan 17 00:05:09.020093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a-rootfs.mount: Deactivated successfully. Jan 17 00:05:09.023984 systemd[1]: cri-containerd-47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4.scope: Deactivated successfully. Jan 17 00:05:09.024262 systemd[1]: cri-containerd-47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4.scope: Consumed 7.340s CPU time. 
Jan 17 00:05:09.036882 containerd[1486]: time="2026-01-17T00:05:09.036820533Z" level=info msg="shim disconnected" id=796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a namespace=k8s.io Jan 17 00:05:09.036882 containerd[1486]: time="2026-01-17T00:05:09.036871170Z" level=warning msg="cleaning up after shim disconnected" id=796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a namespace=k8s.io Jan 17 00:05:09.036882 containerd[1486]: time="2026-01-17T00:05:09.036880729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:09.053311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4-rootfs.mount: Deactivated successfully. Jan 17 00:05:09.060892 containerd[1486]: time="2026-01-17T00:05:09.060809255Z" level=info msg="shim disconnected" id=47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4 namespace=k8s.io Jan 17 00:05:09.060892 containerd[1486]: time="2026-01-17T00:05:09.060878851Z" level=warning msg="cleaning up after shim disconnected" id=47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4 namespace=k8s.io Jan 17 00:05:09.060892 containerd[1486]: time="2026-01-17T00:05:09.060888210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:09.069796 containerd[1486]: time="2026-01-17T00:05:09.069542292Z" level=info msg="StopContainer for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" returns successfully" Jan 17 00:05:09.071653 containerd[1486]: time="2026-01-17T00:05:09.070689783Z" level=info msg="StopPodSandbox for \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\"" Jan 17 00:05:09.071653 containerd[1486]: time="2026-01-17T00:05:09.070754779Z" level=info msg="Container to stop \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.075639 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa-shm.mount: Deactivated successfully. Jan 17 00:05:09.092774 systemd[1]: cri-containerd-e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa.scope: Deactivated successfully. Jan 17 00:05:09.094456 containerd[1486]: time="2026-01-17T00:05:09.093698084Z" level=info msg="StopContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" returns successfully" Jan 17 00:05:09.095105 containerd[1486]: time="2026-01-17T00:05:09.095005525Z" level=info msg="StopPodSandbox for \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\"" Jan 17 00:05:09.095292 containerd[1486]: time="2026-01-17T00:05:09.095245831Z" level=info msg="Container to stop \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.095380 containerd[1486]: time="2026-01-17T00:05:09.095365624Z" level=info msg="Container to stop \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.095452 containerd[1486]: time="2026-01-17T00:05:09.095423820Z" level=info msg="Container to stop \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.095607 containerd[1486]: time="2026-01-17T00:05:09.095495096Z" level=info msg="Container to stop \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.095607 containerd[1486]: time="2026-01-17T00:05:09.095521215Z" level=info msg="Container to stop \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:05:09.103272 systemd[1]: 
cri-containerd-f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73.scope: Deactivated successfully. Jan 17 00:05:09.117416 kubelet[2566]: E0117 00:05:09.117282 2566 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:05:09.131856 containerd[1486]: time="2026-01-17T00:05:09.131245633Z" level=info msg="shim disconnected" id=e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa namespace=k8s.io Jan 17 00:05:09.131856 containerd[1486]: time="2026-01-17T00:05:09.131605412Z" level=warning msg="cleaning up after shim disconnected" id=e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa namespace=k8s.io Jan 17 00:05:09.131856 containerd[1486]: time="2026-01-17T00:05:09.131619291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:09.132188 containerd[1486]: time="2026-01-17T00:05:09.131439582Z" level=info msg="shim disconnected" id=f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73 namespace=k8s.io Jan 17 00:05:09.132188 containerd[1486]: time="2026-01-17T00:05:09.131950151Z" level=warning msg="cleaning up after shim disconnected" id=f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73 namespace=k8s.io Jan 17 00:05:09.132188 containerd[1486]: time="2026-01-17T00:05:09.131957671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:09.149343 containerd[1486]: time="2026-01-17T00:05:09.149043847Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:05:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:05:09.149778 containerd[1486]: time="2026-01-17T00:05:09.149622172Z" level=info msg="TearDown network for sandbox \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\" successfully" Jan 17 
00:05:09.149778 containerd[1486]: time="2026-01-17T00:05:09.149658610Z" level=info msg="StopPodSandbox for \"e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa\" returns successfully" Jan 17 00:05:09.150619 containerd[1486]: time="2026-01-17T00:05:09.150282013Z" level=info msg="TearDown network for sandbox \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" successfully" Jan 17 00:05:09.150619 containerd[1486]: time="2026-01-17T00:05:09.150346169Z" level=info msg="StopPodSandbox for \"f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73\" returns successfully" Jan 17 00:05:09.327483 kubelet[2566]: I0117 00:05:09.327277 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hubble-tls\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327483 kubelet[2566]: I0117 00:05:09.327396 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cni-path\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327483 kubelet[2566]: I0117 00:05:09.327441 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0fc02e7-6d94-4715-b1f0-228803910e8d-cilium-config-path\") pod \"e0fc02e7-6d94-4715-b1f0-228803910e8d\" (UID: \"e0fc02e7-6d94-4715-b1f0-228803910e8d\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327499 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-kernel\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: 
\"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327537 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-etc-cni-netd\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327573 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8jlh\" (UniqueName: \"kubernetes.io/projected/e0fc02e7-6d94-4715-b1f0-228803910e8d-kube-api-access-s8jlh\") pod \"e0fc02e7-6d94-4715-b1f0-228803910e8d\" (UID: \"e0fc02e7-6d94-4715-b1f0-228803910e8d\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327606 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-cgroup\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327637 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-lib-modules\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.327787 kubelet[2566]: I0117 00:05:09.327675 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85sr9\" (UniqueName: \"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327717 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-config-path\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327754 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-clustermesh-secrets\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327786 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-net\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327820 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-run\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327854 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-bpf-maps\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328180 kubelet[2566]: I0117 00:05:09.327889 2566 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-xtables-lock\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328534 kubelet[2566]: I0117 00:05:09.327959 2566 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hostproc\") pod \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\" (UID: \"8d2c870c-48d3-4bc7-9d48-5071db9f73bc\") " Jan 17 00:05:09.328534 kubelet[2566]: I0117 00:05:09.328096 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.328534 kubelet[2566]: I0117 00:05:09.328155 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.334952 kubelet[2566]: I0117 00:05:09.332736 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:05:09.334952 kubelet[2566]: I0117 00:05:09.334656 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fc02e7-6d94-4715-b1f0-228803910e8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0fc02e7-6d94-4715-b1f0-228803910e8d" (UID: "e0fc02e7-6d94-4715-b1f0-228803910e8d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:05:09.335262 kubelet[2566]: I0117 00:05:09.335231 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.335369 kubelet[2566]: I0117 00:05:09.335271 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.336362 kubelet[2566]: I0117 00:05:09.336336 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.336617 kubelet[2566]: I0117 00:05:09.336470 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.336702 kubelet[2566]: I0117 00:05:09.336478 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.336753 kubelet[2566]: I0117 00:05:09.336492 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.337003 kubelet[2566]: I0117 00:05:09.336503 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.337086 kubelet[2566]: I0117 00:05:09.336513 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:05:09.337147 kubelet[2566]: I0117 00:05:09.336570 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9" (OuterVolumeSpecName: "kube-api-access-85sr9") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "kube-api-access-85sr9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:05:09.338965 kubelet[2566]: I0117 00:05:09.338935 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:05:09.340565 kubelet[2566]: I0117 00:05:09.340501 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fc02e7-6d94-4715-b1f0-228803910e8d-kube-api-access-s8jlh" (OuterVolumeSpecName: "kube-api-access-s8jlh") pod "e0fc02e7-6d94-4715-b1f0-228803910e8d" (UID: "e0fc02e7-6d94-4715-b1f0-228803910e8d"). InnerVolumeSpecName "kube-api-access-s8jlh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:05:09.340845 kubelet[2566]: I0117 00:05:09.340808 2566 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d2c870c-48d3-4bc7-9d48-5071db9f73bc" (UID: "8d2c870c-48d3-4bc7-9d48-5071db9f73bc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428183 2566 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-clustermesh-secrets\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428234 2566 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-config-path\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428251 2566 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-net\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428263 2566 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-run\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428276 2566 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-bpf-maps\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428295 2566 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-xtables-lock\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428306 2566 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hostproc\") on node 
\"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428271 kubelet[2566]: I0117 00:05:09.428316 2566 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-hubble-tls\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428326 2566 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cni-path\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428336 2566 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0fc02e7-6d94-4715-b1f0-228803910e8d-cilium-config-path\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428346 2566 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-etc-cni-netd\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428356 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8jlh\" (UniqueName: \"kubernetes.io/projected/e0fc02e7-6d94-4715-b1f0-228803910e8d-kube-api-access-s8jlh\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428366 2566 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428378 2566 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-85sr9\" (UniqueName: 
\"kubernetes.io/projected/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-kube-api-access-85sr9\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428389 2566 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-cilium-cgroup\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.428610 kubelet[2566]: I0117 00:05:09.428399 2566 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d2c870c-48d3-4bc7-9d48-5071db9f73bc-lib-modules\") on node \"ci-4081-3-6-n-5d990e87a1\" DevicePath \"\"" Jan 17 00:05:09.557425 kubelet[2566]: I0117 00:05:09.557384 2566 scope.go:117] "RemoveContainer" containerID="796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a" Jan 17 00:05:09.561103 containerd[1486]: time="2026-01-17T00:05:09.561048074Z" level=info msg="RemoveContainer for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\"" Jan 17 00:05:09.567758 containerd[1486]: time="2026-01-17T00:05:09.567699995Z" level=info msg="RemoveContainer for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" returns successfully" Jan 17 00:05:09.569448 kubelet[2566]: I0117 00:05:09.569044 2566 scope.go:117] "RemoveContainer" containerID="796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a" Jan 17 00:05:09.572356 containerd[1486]: time="2026-01-17T00:05:09.570788890Z" level=error msg="ContainerStatus for \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\": not found" Jan 17 00:05:09.571698 systemd[1]: Removed slice kubepods-burstable-pod8d2c870c_48d3_4bc7_9d48_5071db9f73bc.slice - libcontainer container 
kubepods-burstable-pod8d2c870c_48d3_4bc7_9d48_5071db9f73bc.slice. Jan 17 00:05:09.571799 systemd[1]: kubepods-burstable-pod8d2c870c_48d3_4bc7_9d48_5071db9f73bc.slice: Consumed 7.427s CPU time. Jan 17 00:05:09.573964 kubelet[2566]: E0117 00:05:09.572994 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\": not found" containerID="796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a" Jan 17 00:05:09.573964 kubelet[2566]: I0117 00:05:09.573041 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a"} err="failed to get container status \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"796630ba43198d9d36dcafd0d932bcc72d4d8b9c4a8f52f8b5c82cabfa6b8f9a\": not found" Jan 17 00:05:09.573964 kubelet[2566]: I0117 00:05:09.573162 2566 scope.go:117] "RemoveContainer" containerID="47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4" Jan 17 00:05:09.575266 containerd[1486]: time="2026-01-17T00:05:09.575070193Z" level=info msg="RemoveContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\"" Jan 17 00:05:09.577328 systemd[1]: Removed slice kubepods-besteffort-pode0fc02e7_6d94_4715_b1f0_228803910e8d.slice - libcontainer container kubepods-besteffort-pode0fc02e7_6d94_4715_b1f0_228803910e8d.slice. 
Jan 17 00:05:09.583958 containerd[1486]: time="2026-01-17T00:05:09.583893984Z" level=info msg="RemoveContainer for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" returns successfully" Jan 17 00:05:09.584295 kubelet[2566]: I0117 00:05:09.584251 2566 scope.go:117] "RemoveContainer" containerID="163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8" Jan 17 00:05:09.587536 containerd[1486]: time="2026-01-17T00:05:09.587052475Z" level=info msg="RemoveContainer for \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\"" Jan 17 00:05:09.591748 containerd[1486]: time="2026-01-17T00:05:09.591703276Z" level=info msg="RemoveContainer for \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\" returns successfully" Jan 17 00:05:09.592266 kubelet[2566]: I0117 00:05:09.592133 2566 scope.go:117] "RemoveContainer" containerID="6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1" Jan 17 00:05:09.595751 containerd[1486]: time="2026-01-17T00:05:09.595692997Z" level=info msg="RemoveContainer for \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\"" Jan 17 00:05:09.599832 containerd[1486]: time="2026-01-17T00:05:09.599742634Z" level=info msg="RemoveContainer for \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\" returns successfully" Jan 17 00:05:09.600305 kubelet[2566]: I0117 00:05:09.600229 2566 scope.go:117] "RemoveContainer" containerID="bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165" Jan 17 00:05:09.602078 containerd[1486]: time="2026-01-17T00:05:09.601834829Z" level=info msg="RemoveContainer for \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\"" Jan 17 00:05:09.606975 containerd[1486]: time="2026-01-17T00:05:09.606866127Z" level=info msg="RemoveContainer for \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\" returns successfully" Jan 17 00:05:09.608084 kubelet[2566]: I0117 00:05:09.608048 2566 scope.go:117] 
"RemoveContainer" containerID="ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe" Jan 17 00:05:09.609935 containerd[1486]: time="2026-01-17T00:05:09.609667600Z" level=info msg="RemoveContainer for \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\"" Jan 17 00:05:09.612844 containerd[1486]: time="2026-01-17T00:05:09.612804052Z" level=info msg="RemoveContainer for \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\" returns successfully" Jan 17 00:05:09.613384 kubelet[2566]: I0117 00:05:09.613304 2566 scope.go:117] "RemoveContainer" containerID="47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4" Jan 17 00:05:09.613683 containerd[1486]: time="2026-01-17T00:05:09.613651081Z" level=error msg="ContainerStatus for \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\": not found" Jan 17 00:05:09.614032 kubelet[2566]: E0117 00:05:09.613813 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\": not found" containerID="47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4" Jan 17 00:05:09.614032 kubelet[2566]: I0117 00:05:09.613888 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4"} err="failed to get container status \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"47bef6807774883cd41ddf780469733b6b2752f3d8c74680914d6442a9a781e4\": not found" Jan 17 00:05:09.614032 kubelet[2566]: I0117 00:05:09.613956 2566 scope.go:117] "RemoveContainer" 
containerID="163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8" Jan 17 00:05:09.614172 containerd[1486]: time="2026-01-17T00:05:09.614122973Z" level=error msg="ContainerStatus for \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\": not found" Jan 17 00:05:09.614607 kubelet[2566]: E0117 00:05:09.614397 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\": not found" containerID="163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8" Jan 17 00:05:09.614607 kubelet[2566]: I0117 00:05:09.614421 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8"} err="failed to get container status \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"163fdc30e7f0bd2171423db92c9e991428fee33e155d3d6c996ffdfa072a6fc8\": not found" Jan 17 00:05:09.614607 kubelet[2566]: I0117 00:05:09.614453 2566 scope.go:117] "RemoveContainer" containerID="6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1" Jan 17 00:05:09.616293 containerd[1486]: time="2026-01-17T00:05:09.615971342Z" level=error msg="ContainerStatus for \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\": not found" Jan 17 00:05:09.616561 kubelet[2566]: E0117 00:05:09.616494 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\": not found" containerID="6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1" Jan 17 00:05:09.616716 kubelet[2566]: I0117 00:05:09.616656 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1"} err="failed to get container status \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b92a7b2fe42405bd95e7e1d8922c8e2a283f0ee56f4e5129e82306dbccba9d1\": not found" Jan 17 00:05:09.617090 kubelet[2566]: I0117 00:05:09.616865 2566 scope.go:117] "RemoveContainer" containerID="bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165" Jan 17 00:05:09.617168 containerd[1486]: time="2026-01-17T00:05:09.617102674Z" level=error msg="ContainerStatus for \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\": not found" Jan 17 00:05:09.617695 kubelet[2566]: E0117 00:05:09.617433 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\": not found" containerID="bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165" Jan 17 00:05:09.617695 kubelet[2566]: I0117 00:05:09.617564 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165"} err="failed to get container status \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"bd184a779632a7687789f3b299e81367d12f594b479d67b92d89abba566f7165\": not found" Jan 17 00:05:09.617695 kubelet[2566]: I0117 00:05:09.617582 2566 scope.go:117] "RemoveContainer" containerID="ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe" Jan 17 00:05:09.618348 containerd[1486]: time="2026-01-17T00:05:09.618158371Z" level=error msg="ContainerStatus for \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\": not found" Jan 17 00:05:09.618639 kubelet[2566]: E0117 00:05:09.618585 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\": not found" containerID="ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe" Jan 17 00:05:09.618639 kubelet[2566]: I0117 00:05:09.618608 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe"} err="failed to get container status \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddfe3e18823e1ab625e3ba124cd5728b83bbe12960f84ae0bcf186991dcca6fe\": not found" Jan 17 00:05:09.953141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2c9991691ffba1dc9f6c7a7b21cb66d63a8c6f2aa219304a57367388fd1d0fa-rootfs.mount: Deactivated successfully. Jan 17 00:05:09.953572 systemd[1]: var-lib-kubelet-pods-e0fc02e7\x2d6d94\x2d4715\x2db1f0\x2d228803910e8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8jlh.mount: Deactivated successfully. 
Jan 17 00:05:09.953650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73-rootfs.mount: Deactivated successfully. Jan 17 00:05:09.953708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f61713d53697fdd47a9508ddc66b68c75bac0703aa0fccab8972a99089a7fe73-shm.mount: Deactivated successfully. Jan 17 00:05:09.953764 systemd[1]: var-lib-kubelet-pods-8d2c870c\x2d48d3\x2d4bc7\x2d9d48\x2d5071db9f73bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85sr9.mount: Deactivated successfully. Jan 17 00:05:09.953823 systemd[1]: var-lib-kubelet-pods-8d2c870c\x2d48d3\x2d4bc7\x2d9d48\x2d5071db9f73bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:05:09.953882 systemd[1]: var-lib-kubelet-pods-8d2c870c\x2d48d3\x2d4bc7\x2d9d48\x2d5071db9f73bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:05:09.964967 kubelet[2566]: I0117 00:05:09.963747 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2c870c-48d3-4bc7-9d48-5071db9f73bc" path="/var/lib/kubelet/pods/8d2c870c-48d3-4bc7-9d48-5071db9f73bc/volumes" Jan 17 00:05:09.964967 kubelet[2566]: I0117 00:05:09.964678 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fc02e7-6d94-4715-b1f0-228803910e8d" path="/var/lib/kubelet/pods/e0fc02e7-6d94-4715-b1f0-228803910e8d/volumes" Jan 17 00:05:10.979566 sshd[4157]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:10.984776 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:05:10.986142 systemd[1]: sshd@21-188.245.80.168:22-4.153.228.146:43944.service: Deactivated successfully. Jan 17 00:05:10.989806 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:05:10.990022 systemd[1]: session-22.scope: Consumed 1.305s CPU time. Jan 17 00:05:10.991271 systemd-logind[1456]: Removed session 22. 
Jan 17 00:05:11.096345 systemd[1]: Started sshd@22-188.245.80.168:22-4.153.228.146:43958.service - OpenSSH per-connection server daemon (4.153.228.146:43958). Jan 17 00:05:11.726582 sshd[4319]: Accepted publickey for core from 4.153.228.146 port 43958 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:11.728836 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:11.737689 systemd-logind[1456]: New session 23 of user core. Jan 17 00:05:11.744294 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:05:12.828476 kubelet[2566]: I0117 00:05:12.828421 2566 memory_manager.go:355] "RemoveStaleState removing state" podUID="e0fc02e7-6d94-4715-b1f0-228803910e8d" containerName="cilium-operator" Jan 17 00:05:12.828476 kubelet[2566]: I0117 00:05:12.828462 2566 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d2c870c-48d3-4bc7-9d48-5071db9f73bc" containerName="cilium-agent" Jan 17 00:05:12.838347 systemd[1]: Created slice kubepods-burstable-pod8e63d458_2733_4c50_af94_8fcceb0a2a14.slice - libcontainer container kubepods-burstable-pod8e63d458_2733_4c50_af94_8fcceb0a2a14.slice. Jan 17 00:05:12.896195 sshd[4319]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:12.901290 systemd[1]: sshd@22-188.245.80.168:22-4.153.228.146:43958.service: Deactivated successfully. Jan 17 00:05:12.904160 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:05:12.905896 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:05:12.909677 systemd-logind[1456]: Removed session 23. 
Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950314 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-bpf-maps\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950391 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-host-proc-sys-net\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950413 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-hostproc\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950444 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-cilium-cgroup\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950463 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-lib-modules\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.950969 kubelet[2566]: I0117 00:05:12.950478 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-cni-path\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950543 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e63d458-2733-4c50-af94-8fcceb0a2a14-cilium-config-path\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950564 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-etc-cni-netd\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950740 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e63d458-2733-4c50-af94-8fcceb0a2a14-clustermesh-secrets\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950759 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-host-proc-sys-kernel\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950789 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8e63d458-2733-4c50-af94-8fcceb0a2a14-hubble-tls\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951294 kubelet[2566]: I0117 00:05:12.950814 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-xtables-lock\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951421 kubelet[2566]: I0117 00:05:12.950828 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e63d458-2733-4c50-af94-8fcceb0a2a14-cilium-ipsec-secrets\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951421 kubelet[2566]: I0117 00:05:12.950843 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e63d458-2733-4c50-af94-8fcceb0a2a14-cilium-run\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:12.951421 kubelet[2566]: I0117 00:05:12.950866 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x886\" (UniqueName: \"kubernetes.io/projected/8e63d458-2733-4c50-af94-8fcceb0a2a14-kube-api-access-7x886\") pod \"cilium-nxjbt\" (UID: \"8e63d458-2733-4c50-af94-8fcceb0a2a14\") " pod="kube-system/cilium-nxjbt" Jan 17 00:05:13.003406 systemd[1]: Started sshd@23-188.245.80.168:22-4.153.228.146:43974.service - OpenSSH per-connection server daemon (4.153.228.146:43974). 
Jan 17 00:05:13.146678 containerd[1486]: time="2026-01-17T00:05:13.146492254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxjbt,Uid:8e63d458-2733-4c50-af94-8fcceb0a2a14,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:13.170882 containerd[1486]: time="2026-01-17T00:05:13.170651171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:13.170882 containerd[1486]: time="2026-01-17T00:05:13.170715768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:13.170882 containerd[1486]: time="2026-01-17T00:05:13.170745486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:13.170882 containerd[1486]: time="2026-01-17T00:05:13.170841281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:13.189162 systemd[1]: Started cri-containerd-c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad.scope - libcontainer container c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad. 
Jan 17 00:05:13.215179 containerd[1486]: time="2026-01-17T00:05:13.215026808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxjbt,Uid:8e63d458-2733-4c50-af94-8fcceb0a2a14,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\"" Jan 17 00:05:13.218166 containerd[1486]: time="2026-01-17T00:05:13.218109169Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:05:13.233792 containerd[1486]: time="2026-01-17T00:05:13.233733045Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae\"" Jan 17 00:05:13.234472 containerd[1486]: time="2026-01-17T00:05:13.234394771Z" level=info msg="StartContainer for \"0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae\"" Jan 17 00:05:13.262176 systemd[1]: Started cri-containerd-0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae.scope - libcontainer container 0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae. Jan 17 00:05:13.288659 containerd[1486]: time="2026-01-17T00:05:13.288611981Z" level=info msg="StartContainer for \"0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae\" returns successfully" Jan 17 00:05:13.299512 systemd[1]: cri-containerd-0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae.scope: Deactivated successfully. 
Jan 17 00:05:13.332976 containerd[1486]: time="2026-01-17T00:05:13.332815066Z" level=info msg="shim disconnected" id=0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae namespace=k8s.io
Jan 17 00:05:13.332976 containerd[1486]: time="2026-01-17T00:05:13.332973938Z" level=warning msg="cleaning up after shim disconnected" id=0516aaea868ece8109ea1c239b6f84386bb71031dbb5de77db8e4d8cb4cd87ae namespace=k8s.io
Jan 17 00:05:13.333232 containerd[1486]: time="2026-01-17T00:05:13.332988617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:13.584249 containerd[1486]: time="2026-01-17T00:05:13.584203050Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:05:13.594631 containerd[1486]: time="2026-01-17T00:05:13.594496880Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0\""
Jan 17 00:05:13.596239 containerd[1486]: time="2026-01-17T00:05:13.596170834Z" level=info msg="StartContainer for \"7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0\""
Jan 17 00:05:13.613637 sshd[4332]: Accepted publickey for core from 4.153.228.146 port 43974 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:13.615591 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:13.621247 systemd-logind[1456]: New session 24 of user core.
Jan 17 00:05:13.630195 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:05:13.642198 systemd[1]: Started cri-containerd-7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0.scope - libcontainer container 7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0.
Jan 17 00:05:13.669485 containerd[1486]: time="2026-01-17T00:05:13.669433144Z" level=info msg="StartContainer for \"7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0\" returns successfully"
Jan 17 00:05:13.677362 systemd[1]: cri-containerd-7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0.scope: Deactivated successfully.
Jan 17 00:05:13.699339 containerd[1486]: time="2026-01-17T00:05:13.699276448Z" level=info msg="shim disconnected" id=7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0 namespace=k8s.io
Jan 17 00:05:13.699339 containerd[1486]: time="2026-01-17T00:05:13.699336525Z" level=warning msg="cleaning up after shim disconnected" id=7f1cd3b12121bf19d600c1d2d82f8ea81038af1ba7b5329e4381a4143f4cd3b0 namespace=k8s.io
Jan 17 00:05:13.699339 containerd[1486]: time="2026-01-17T00:05:13.699347485Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:14.039713 sshd[4332]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:14.047226 systemd[1]: sshd@23-188.245.80.168:22-4.153.228.146:43974.service: Deactivated successfully.
Jan 17 00:05:14.050231 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:05:14.052241 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:05:14.053511 systemd-logind[1456]: Removed session 24.
Jan 17 00:05:14.119769 kubelet[2566]: E0117 00:05:14.119432 2566 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 00:05:14.154249 systemd[1]: Started sshd@24-188.245.80.168:22-4.153.228.146:43982.service - OpenSSH per-connection server daemon (4.153.228.146:43982).
Jan 17 00:05:14.586989 containerd[1486]: time="2026-01-17T00:05:14.586781211Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:05:14.604523 containerd[1486]: time="2026-01-17T00:05:14.604468657Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929\""
Jan 17 00:05:14.606938 containerd[1486]: time="2026-01-17T00:05:14.605133704Z" level=info msg="StartContainer for \"b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929\""
Jan 17 00:05:14.650277 systemd[1]: Started cri-containerd-b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929.scope - libcontainer container b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929.
Jan 17 00:05:14.687895 containerd[1486]: time="2026-01-17T00:05:14.687852655Z" level=info msg="StartContainer for \"b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929\" returns successfully"
Jan 17 00:05:14.689070 systemd[1]: cri-containerd-b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929.scope: Deactivated successfully.
Jan 17 00:05:14.720413 containerd[1486]: time="2026-01-17T00:05:14.720213136Z" level=info msg="shim disconnected" id=b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929 namespace=k8s.io
Jan 17 00:05:14.720413 containerd[1486]: time="2026-01-17T00:05:14.720267893Z" level=warning msg="cleaning up after shim disconnected" id=b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929 namespace=k8s.io
Jan 17 00:05:14.720413 containerd[1486]: time="2026-01-17T00:05:14.720275653Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:14.770515 sshd[4506]: Accepted publickey for core from 4.153.228.146 port 43982 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:14.772714 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:14.777473 systemd-logind[1456]: New session 25 of user core.
Jan 17 00:05:14.784240 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:05:15.062700 systemd[1]: run-containerd-runc-k8s.io-b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929-runc.bMdz5Y.mount: Deactivated successfully.
Jan 17 00:05:15.062892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a721a94e450f618719c4cfb60a2a2e013c98bfc985082b87100692da158929-rootfs.mount: Deactivated successfully.
Jan 17 00:05:15.594875 containerd[1486]: time="2026-01-17T00:05:15.594810861Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:05:15.611882 containerd[1486]: time="2026-01-17T00:05:15.611817375Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b\""
Jan 17 00:05:15.614590 containerd[1486]: time="2026-01-17T00:05:15.612922642Z" level=info msg="StartContainer for \"c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b\""
Jan 17 00:05:15.651322 systemd[1]: Started cri-containerd-c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b.scope - libcontainer container c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b.
Jan 17 00:05:15.676794 systemd[1]: cri-containerd-c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b.scope: Deactivated successfully.
Jan 17 00:05:15.681240 containerd[1486]: time="2026-01-17T00:05:15.680967336Z" level=info msg="StartContainer for \"c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b\" returns successfully"
Jan 17 00:05:15.706138 containerd[1486]: time="2026-01-17T00:05:15.705772199Z" level=info msg="shim disconnected" id=c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b namespace=k8s.io
Jan 17 00:05:15.706138 containerd[1486]: time="2026-01-17T00:05:15.705941391Z" level=warning msg="cleaning up after shim disconnected" id=c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b namespace=k8s.io
Jan 17 00:05:15.706138 containerd[1486]: time="2026-01-17T00:05:15.705960831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:16.063257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6b8e50afdb501434f06640e229b0d1268409a2f2e50bfb0817155ac6faa031b-rootfs.mount: Deactivated successfully.
Jan 17 00:05:16.598845 containerd[1486]: time="2026-01-17T00:05:16.597578049Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:05:16.618673 containerd[1486]: time="2026-01-17T00:05:16.618590294Z" level=info msg="CreateContainer within sandbox \"c1026c91bd9fb431c22835d05fa7f1cf57c99e21835fa2e5037f3294862ab0ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65\""
Jan 17 00:05:16.621228 containerd[1486]: time="2026-01-17T00:05:16.619506212Z" level=info msg="StartContainer for \"19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65\""
Jan 17 00:05:16.657232 systemd[1]: Started cri-containerd-19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65.scope - libcontainer container 19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65.
Jan 17 00:05:16.694014 containerd[1486]: time="2026-01-17T00:05:16.693967668Z" level=info msg="StartContainer for \"19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65\" returns successfully"
Jan 17 00:05:16.997959 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 17 00:05:17.619326 kubelet[2566]: I0117 00:05:17.619187 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nxjbt" podStartSLOduration=5.619170143 podStartE2EDuration="5.619170143s" podCreationTimestamp="2026-01-17 00:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:17.618986791 +0000 UTC m=+203.770265248" watchObservedRunningTime="2026-01-17 00:05:17.619170143 +0000 UTC m=+203.770448560"
Jan 17 00:05:18.883034 kubelet[2566]: I0117 00:05:18.881953 2566 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-5d990e87a1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:05:18Z","lastTransitionTime":"2026-01-17T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:05:19.986594 systemd-networkd[1372]: lxc_health: Link UP
Jan 17 00:05:19.996186 systemd-networkd[1372]: lxc_health: Gained carrier
Jan 17 00:05:21.061143 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Jan 17 00:05:21.574584 systemd[1]: run-containerd-runc-k8s.io-19d375cbf4bd8bced3715e1a480fa4a0b8754b102b7f890249741de5dcc18b65-runc.F5ooPQ.mount: Deactivated successfully.
Jan 17 00:05:25.970169 kubelet[2566]: E0117 00:05:25.970081 2566 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42400->127.0.0.1:44377: write tcp 127.0.0.1:42400->127.0.0.1:44377: write: broken pipe
Jan 17 00:05:26.071155 sshd[4506]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:26.078091 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:05:26.079220 systemd[1]: sshd@24-188.245.80.168:22-4.153.228.146:43982.service: Deactivated successfully.
Jan 17 00:05:26.081887 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:05:26.083820 systemd-logind[1456]: Removed session 25.
Jan 17 00:05:41.612096 kubelet[2566]: E0117 00:05:41.609813 2566 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41608->10.0.0.2:2379: read: connection timed out"
Jan 17 00:05:41.898571 systemd[1]: cri-containerd-9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e.scope: Deactivated successfully.
Jan 17 00:05:41.899101 systemd[1]: cri-containerd-9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e.scope: Consumed 4.749s CPU time, 19.9M memory peak, 0B memory swap peak.
Jan 17 00:05:41.921880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e-rootfs.mount: Deactivated successfully.
Jan 17 00:05:41.928015 containerd[1486]: time="2026-01-17T00:05:41.927918941Z" level=info msg="shim disconnected" id=9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e namespace=k8s.io
Jan 17 00:05:41.928015 containerd[1486]: time="2026-01-17T00:05:41.928012460Z" level=warning msg="cleaning up after shim disconnected" id=9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e namespace=k8s.io
Jan 17 00:05:41.928015 containerd[1486]: time="2026-01-17T00:05:41.928024060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:42.658723 kubelet[2566]: I0117 00:05:42.658642 2566 scope.go:117] "RemoveContainer" containerID="9d7193ab25ad2019afe2f282c7a055ddfc7fad3ea0f4f29cdc114a9177948c7e"
Jan 17 00:05:42.662337 containerd[1486]: time="2026-01-17T00:05:42.662280826Z" level=info msg="CreateContainer within sandbox \"b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:05:42.676794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773757622.mount: Deactivated successfully.
Jan 17 00:05:42.678838 containerd[1486]: time="2026-01-17T00:05:42.678793365Z" level=info msg="CreateContainer within sandbox \"b736e39fa274cf56d9af0f1b1372bbf65ef3874636421afee2b2bf2e21028ef4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c\""
Jan 17 00:05:42.679626 containerd[1486]: time="2026-01-17T00:05:42.679377443Z" level=info msg="StartContainer for \"9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c\""
Jan 17 00:05:42.714142 systemd[1]: Started cri-containerd-9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c.scope - libcontainer container 9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c.
Jan 17 00:05:42.760920 containerd[1486]: time="2026-01-17T00:05:42.758790911Z" level=info msg="StartContainer for \"9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c\" returns successfully"
Jan 17 00:05:42.922584 systemd[1]: run-containerd-runc-k8s.io-9f44eb619e9f4f6f6ec1ab84118d8080c8c43584049cef8b9f5abed9312d3e4c-runc.Yz16Cd.mount: Deactivated successfully.
Jan 17 00:05:45.191723 kubelet[2566]: E0117 00:05:45.191325 2566 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41438->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-5d990e87a1.188b5be8f29ed179 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-5d990e87a1,UID:9c20e0248ac1d391be5a2bd34f71a9f8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-5d990e87a1,},FirstTimestamp:2026-01-17 00:05:34.734086521 +0000 UTC m=+220.885364938,LastTimestamp:2026-01-17 00:05:34.734086521 +0000 UTC m=+220.885364938,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-5d990e87a1,}"