Jan 17 00:00:56.898118 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 00:00:56.898142 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:00:56.898152 kernel: KASLR enabled
Jan 17 00:00:56.898158 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:00:56.898164 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 17 00:00:56.898170 kernel: random: crng init done
Jan 17 00:00:56.898177 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:00:56.898182 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 17 00:00:56.898189 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:00:56.898196 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898202 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898208 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898215 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898221 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898228 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898236 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898243 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898249 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:00:56.898255 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:00:56.898262 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 17 00:00:56.898268 kernel: NUMA: Failed to initialise from firmware
Jan 17 00:00:56.898286 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:00:56.898293 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Jan 17 00:00:56.898300 kernel: Zone ranges:
Jan 17 00:00:56.898306 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 00:00:56.898314 kernel: DMA32 empty
Jan 17 00:00:56.898320 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 17 00:00:56.898326 kernel: Movable zone start for each node
Jan 17 00:00:56.899385 kernel: Early memory node ranges
Jan 17 00:00:56.899394 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 17 00:00:56.899401 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 17 00:00:56.899408 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 17 00:00:56.899414 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 17 00:00:56.899421 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 17 00:00:56.899428 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 17 00:00:56.899434 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 17 00:00:56.899441 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:00:56.899451 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:00:56.899457 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:00:56.899464 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 00:00:56.899474 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:00:56.899481 kernel: psci: Trusted OS migration not required
Jan 17 00:00:56.899488 kernel: psci: SMC Calling Convention v1.1
Jan 17 00:00:56.899496 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 00:00:56.899503 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:00:56.899510 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:00:56.899518 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:00:56.899524 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:00:56.899531 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:00:56.899538 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 00:00:56.899544 kernel: CPU features: detected: Spectre-v4
Jan 17 00:00:56.899551 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:00:56.899558 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 00:00:56.899566 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 00:00:56.899573 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 00:00:56.899580 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 00:00:56.899586 kernel: alternatives: applying boot alternatives
Jan 17 00:00:56.899595 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:00:56.899602 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:00:56.899609 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:00:56.899616 kernel: Fallback order for Node 0: 0
Jan 17 00:00:56.899622 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 17 00:00:56.899629 kernel: Policy zone: Normal
Jan 17 00:00:56.899636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:00:56.899644 kernel: software IO TLB: area num 2.
Jan 17 00:00:56.899651 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 17 00:00:56.899658 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Jan 17 00:00:56.899665 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:00:56.899672 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:00:56.899679 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:00:56.899686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:00:56.899693 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:00:56.899700 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:00:56.899707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:00:56.899714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:00:56.899720 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:00:56.899728 kernel: GICv3: 256 SPIs implemented
Jan 17 00:00:56.899735 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:00:56.899742 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:00:56.899749 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 00:00:56.899756 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 00:00:56.899762 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 00:00:56.899769 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 00:00:56.899776 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 00:00:56.899783 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 17 00:00:56.899790 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 17 00:00:56.899797 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:00:56.899805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:56.899812 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 00:00:56.899819 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 00:00:56.899826 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 00:00:56.899833 kernel: Console: colour dummy device 80x25
Jan 17 00:00:56.899840 kernel: ACPI: Core revision 20230628
Jan 17 00:00:56.899847 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 00:00:56.899854 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:00:56.899861 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:00:56.899868 kernel: landlock: Up and running.
Jan 17 00:00:56.899876 kernel: SELinux: Initializing.
Jan 17 00:00:56.899884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:56.899891 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:56.899898 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:56.899905 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:56.899912 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:00:56.899919 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:00:56.899926 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 00:00:56.899933 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 00:00:56.899941 kernel: Remapping and enabling EFI services.
Jan 17 00:00:56.899948 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:00:56.899956 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:00:56.899963 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 00:00:56.899970 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 17 00:00:56.899977 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:56.899984 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 00:00:56.899991 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:00:56.899998 kernel: SMP: Total of 2 processors activated.
Jan 17 00:00:56.900006 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:00:56.900017 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 00:00:56.900024 kernel: CPU features: detected: Common not Private translations
Jan 17 00:00:56.900036 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:00:56.900045 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 00:00:56.900052 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 00:00:56.900060 kernel: CPU features: detected: LSE atomic instructions
Jan 17 00:00:56.900067 kernel: CPU features: detected: Privileged Access Never
Jan 17 00:00:56.900075 kernel: CPU features: detected: RAS Extension Support
Jan 17 00:00:56.900084 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 00:00:56.900091 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:00:56.900099 kernel: alternatives: applying system-wide alternatives
Jan 17 00:00:56.900106 kernel: devtmpfs: initialized
Jan 17 00:00:56.900114 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:00:56.900121 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:00:56.900129 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:00:56.900136 kernel: SMBIOS 3.0.0 present.
Jan 17 00:00:56.900145 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 17 00:00:56.900153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:00:56.900160 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:00:56.900168 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:00:56.900176 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:00:56.900183 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:00:56.900191 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 17 00:00:56.900198 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:00:56.900205 kernel: cpuidle: using governor menu
Jan 17 00:00:56.900214 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:00:56.900222 kernel: ASID allocator initialised with 32768 entries
Jan 17 00:00:56.900229 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:00:56.900236 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:00:56.900244 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 00:00:56.900251 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 00:00:56.900258 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:00:56.900267 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:00:56.900283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:00:56.900294 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:00:56.900302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:00:56.900309 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:00:56.900317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:00:56.900324 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:00:56.901359 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:00:56.901371 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:00:56.901379 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:00:56.901387 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:00:56.901399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:00:56.901407 kernel: ACPI: Interpreter enabled
Jan 17 00:00:56.901414 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:00:56.901421 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 00:00:56.901429 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 00:00:56.901436 kernel: printk: console [ttyAMA0] enabled
Jan 17 00:00:56.901444 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:00:56.901611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:00:56.901691 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:00:56.901759 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:00:56.901825 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 00:00:56.901889 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 00:00:56.901899 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 00:00:56.901906 kernel: PCI host bridge to bus 0000:00
Jan 17 00:00:56.901977 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 00:00:56.902042 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 00:00:56.902102 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 00:00:56.902163 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:00:56.902256 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 00:00:56.903458 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 17 00:00:56.903549 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 17 00:00:56.903619 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:00:56.903702 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.903771 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 17 00:00:56.903848 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.903916 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 17 00:00:56.903989 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.904057 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 17 00:00:56.904140 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.904206 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 17 00:00:56.904293 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.906450 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 17 00:00:56.906550 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.906619 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 17 00:00:56.906701 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.906783 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 17 00:00:56.906859 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.906926 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 17 00:00:56.906999 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:00:56.907065 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 17 00:00:56.907144 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 17 00:00:56.907210 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 17 00:00:56.907400 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:00:56.907496 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 17 00:00:56.907566 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:00:56.907634 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:00:56.907719 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:00:56.907794 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 17 00:00:56.907871 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:00:56.907941 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 17 00:00:56.908011 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 17 00:00:56.908087 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:00:56.908157 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 17 00:00:56.908235 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:00:56.910360 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 17 00:00:56.910492 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 17 00:00:56.910576 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:00:56.910647 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 17 00:00:56.910716 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:00:56.910802 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:00:56.910872 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 17 00:00:56.910941 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 17 00:00:56.911009 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:00:56.911081 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 17 00:00:56.911149 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:00:56.911216 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:00:56.911357 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 17 00:00:56.911435 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 17 00:00:56.911591 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 17 00:00:56.911666 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 00:00:56.911735 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:00:56.911800 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:00:56.911872 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 00:00:56.911940 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 17 00:00:56.912011 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 17 00:00:56.912081 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 00:00:56.912150 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:00:56.912216 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:00:56.912307 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 00:00:56.913450 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:00:56.913533 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:00:56.913613 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:00:56.913680 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:00:56.913745 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:00:56.913816 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:00:56.913882 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:00:56.913946 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:00:56.914018 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:00:56.914084 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:00:56.914153 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:00:56.914222 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 17 00:00:56.917363 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:56.917492 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 17 00:00:56.917562 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:56.917632 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 17 00:00:56.917705 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:56.917775 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 17 00:00:56.917842 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:56.917911 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 17 00:00:56.917977 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:56.918046 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:56.918112 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:56.918185 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:56.918251 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:56.920439 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:56.920525 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:56.920596 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 17 00:00:56.920663 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:56.920734 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 17 00:00:56.920807 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 17 00:00:56.920877 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 17 00:00:56.920945 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:00:56.921014 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 17 00:00:56.921085 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:00:56.921154 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 17 00:00:56.921222 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:00:56.921384 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 17 00:00:56.921476 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 00:00:56.921548 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 17 00:00:56.921615 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 00:00:56.921683 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 17 00:00:56.921755 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 00:00:56.921827 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 17 00:00:56.921893 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 00:00:56.921962 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 17 00:00:56.922031 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 17 00:00:56.922097 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 17 00:00:56.922163 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 17 00:00:56.922233 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 17 00:00:56.922324 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 17 00:00:56.924498 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:00:56.924570 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 17 00:00:56.924638 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:00:56.924710 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 17 00:00:56.924775 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 17 00:00:56.924839 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:56.924914 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 17 00:00:56.924984 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:00:56.925050 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 17 00:00:56.925116 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 17 00:00:56.925181 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:56.925255 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:00:56.925348 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 17 00:00:56.925418 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:00:56.925483 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 17 00:00:56.925552 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 17 00:00:56.925617 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:56.925691 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:00:56.925760 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:00:56.925825 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 17 00:00:56.925891 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 17 00:00:56.925956 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:56.926029 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 17 00:00:56.926101 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 17 00:00:56.926167 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:00:56.926233 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 17 00:00:56.929627 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 17 00:00:56.929767 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:56.929850 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 17 00:00:56.929921 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 17 00:00:56.929991 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:00:56.930064 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 17 00:00:56.930130 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:56.930196 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:56.930270 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 17 00:00:56.931266 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 17 00:00:56.931419 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 17 00:00:56.931492 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:00:56.931557 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 17 00:00:56.931628 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:56.931694 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:56.931765 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:00:56.931830 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 17 00:00:56.931895 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:56.931961 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:56.932029 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:00:56.932095 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 17 00:00:56.932162 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 17 00:00:56.932226 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:56.932312 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 00:00:56.934306 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 00:00:56.934398 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 00:00:56.934473 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 17 00:00:56.934535 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 17 00:00:56.934602 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:00:56.934672 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 17 00:00:56.934733 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 17 00:00:56.934794 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:00:56.934864 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 17 00:00:56.934925 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 17 00:00:56.934988 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:00:56.935064 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 17 00:00:56.935126 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 17 00:00:56.935202 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:00:56.935271 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 17 00:00:56.937361 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 17 00:00:56.937444 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:00:56.937525 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 17 00:00:56.937587 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 17 00:00:56.937654 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:00:56.937723 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 17 00:00:56.937785 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 17 00:00:56.937846 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:00:56.937912 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 17 00:00:56.937973 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 17 00:00:56.938032 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:00:56.938100 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 17 00:00:56.938163 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 17 00:00:56.938226 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:00:56.938237 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 00:00:56.938245 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 00:00:56.938253 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 00:00:56.938260 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 00:00:56.938268 kernel: iommu: Default domain type: Translated
Jan 17 00:00:56.938290 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:00:56.938299 kernel: efivars: Registered efivars operations
Jan 17 00:00:56.938307 kernel: vgaarb: loaded
Jan 17 00:00:56.938318 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:00:56.938326 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:00:56.938350 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:00:56.938359 kernel: pnp: PnP ACPI init
Jan 17 00:00:56.938445 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 00:00:56.938458 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 00:00:56.938466 kernel: NET: Registered PF_INET protocol family
Jan 17 00:00:56.938474 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:00:56.938488 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:00:56.938496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:00:56.938504 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:00:56.938512
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:00:56.938520 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:00:56.938527 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:56.938535 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:56.938543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:00:56.938619 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 17 00:00:56.938633 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:00:56.938641 kernel: kvm [1]: HYP mode not available Jan 17 00:00:56.938649 kernel: Initialise system trusted keyrings Jan 17 00:00:56.938657 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:00:56.938664 kernel: Key type asymmetric registered Jan 17 00:00:56.938672 kernel: Asymmetric key parser 'x509' registered Jan 17 00:00:56.938680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:00:56.938687 kernel: io scheduler mq-deadline registered Jan 17 00:00:56.938695 kernel: io scheduler kyber registered Jan 17 00:00:56.938704 kernel: io scheduler bfq registered Jan 17 00:00:56.938713 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 17 00:00:56.938783 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 17 00:00:56.938853 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 17 00:00:56.938922 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.938991 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 17 00:00:56.939057 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 17 00:00:56.939125 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.939195 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Jan 17 00:00:56.939264 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 17 00:00:56.940468 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.940558 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 17 00:00:56.940627 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 17 00:00:56.940700 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.940769 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 17 00:00:56.940835 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 17 00:00:56.940900 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.940969 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 17 00:00:56.941037 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 17 00:00:56.941106 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.941176 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 17 00:00:56.941242 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 17 00:00:56.941615 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.941702 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 17 00:00:56.941772 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 17 00:00:56.941843 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:00:56.941854 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Jan 17 00:00:56.941921 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 17 00:00:56.941988 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 17 00:00:56.942054 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:00:56.942064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 00:00:56.942076 kernel: ACPI: button: Power Button [PWRB]
Jan 17 00:00:56.942084 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 00:00:56.942159 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 17 00:00:56.942231 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 17 00:00:56.942243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:00:56.942251 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 00:00:56.942446 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 17 00:00:56.942462 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 17 00:00:56.942471 kernel: thunder_xcv, ver 1.0
Jan 17 00:00:56.942483 kernel: thunder_bgx, ver 1.0
Jan 17 00:00:56.942491 kernel: nicpf, ver 1.0
Jan 17 00:00:56.942498 kernel: nicvf, ver 1.0
Jan 17 00:00:56.942595 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:00:56.942662 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:00:56 UTC (1768608056)
Jan 17 00:00:56.942673 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:00:56.942681 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 17 00:00:56.942689 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:00:56.942699 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:00:56.942707 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:00:56.942715 kernel: Segment Routing with IPv6
Jan 17 00:00:56.942723 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:00:56.942731 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:00:56.942738 kernel: Key type dns_resolver registered
Jan 17 00:00:56.942746 kernel: registered taskstats version 1
Jan 17 00:00:56.942754 kernel: Loading compiled-in X.509 certificates
Jan 17 00:00:56.942762 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:00:56.942772 kernel: Key type .fscrypt registered
Jan 17 00:00:56.942780 kernel: Key type fscrypt-provisioning registered
Jan 17 00:00:56.942787 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:00:56.942795 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:00:56.942803 kernel: ima: No architecture policies found
Jan 17 00:00:56.942810 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:00:56.942818 kernel: clk: Disabling unused clocks
Jan 17 00:00:56.942826 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:00:56.942834 kernel: Run /init as init process
Jan 17 00:00:56.942843 kernel: with arguments:
Jan 17 00:00:56.942851 kernel: /init
Jan 17 00:00:56.942858 kernel: with environment:
Jan 17 00:00:56.942866 kernel: HOME=/
Jan 17 00:00:56.942873 kernel: TERM=linux
Jan 17 00:00:56.942883 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:00:56.942893 systemd[1]: Detected virtualization kvm.
Jan 17 00:00:56.942901 systemd[1]: Detected architecture arm64.
Jan 17 00:00:56.942911 systemd[1]: Running in initrd.
Jan 17 00:00:56.942919 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:00:56.942927 systemd[1]: Hostname set to .
Jan 17 00:00:56.942935 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:00:56.942943 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:00:56.942952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:00:56.942961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:00:56.942970 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:00:56.942980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:00:56.942988 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:00:56.942997 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:00:56.943007 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:00:56.943015 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:00:56.943024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:00:56.943032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:00:56.943042 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:00:56.943052 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:00:56.943060 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:00:56.943069 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:00:56.943077 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:00:56.943085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:00:56.943093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:00:56.943102 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:00:56.943112 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:00:56.943120 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:00:56.943129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:00:56.943137 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:00:56.943145 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:00:56.943154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:00:56.943162 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:00:56.943171 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:00:56.943179 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:00:56.943189 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:00:56.943197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:56.943206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:00:56.943214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:00:56.943223 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:00:56.943249 systemd-journald[238]: Collecting audit messages is disabled.
Jan 17 00:00:56.943273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:00:56.943295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:56.943307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:56.943316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:00:56.943325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:00:56.943852 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:00:56.943865 kernel: Bridge firewalling registered
Jan 17 00:00:56.943875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:00:56.943884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:00:56.943894 systemd-journald[238]: Journal started
Jan 17 00:00:56.943920 systemd-journald[238]: Runtime Journal (/run/log/journal/51ad6bff309c4826ab5196ae9e8eede2) is 8.0M, max 76.6M, 68.6M free.
Jan 17 00:00:56.908401 systemd-modules-load[239]: Inserted module 'overlay'
Jan 17 00:00:56.939366 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 17 00:00:56.946008 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:00:56.952750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:00:56.956519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:00:56.957532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:56.963418 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:00:56.971614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:00:56.984629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:00:56.993167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:00:57.002211 dracut-cmdline[271]: dracut-dracut-053
Jan 17 00:00:57.004666 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:00:57.030051 systemd-resolved[275]: Positive Trust Anchors:
Jan 17 00:00:57.030069 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:00:57.030103 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:00:57.036645 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jan 17 00:00:57.037729 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:00:57.039415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:00:57.099390 kernel: SCSI subsystem initialized
Jan 17 00:00:57.103364 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:00:57.111390 kernel: iscsi: registered transport (tcp)
Jan 17 00:00:57.124637 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:00:57.124730 kernel: QLogic iSCSI HBA Driver
Jan 17 00:00:57.173413 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:00:57.178570 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:00:57.201356 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:00:57.201420 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:00:57.201432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:00:57.255392 kernel: raid6: neonx8 gen() 15626 MB/s
Jan 17 00:00:57.272383 kernel: raid6: neonx4 gen() 15243 MB/s
Jan 17 00:00:57.289396 kernel: raid6: neonx2 gen() 13142 MB/s
Jan 17 00:00:57.306390 kernel: raid6: neonx1 gen() 10422 MB/s
Jan 17 00:00:57.323432 kernel: raid6: int64x8 gen() 6911 MB/s
Jan 17 00:00:57.340392 kernel: raid6: int64x4 gen() 7308 MB/s
Jan 17 00:00:57.357414 kernel: raid6: int64x2 gen() 6035 MB/s
Jan 17 00:00:57.375373 kernel: raid6: int64x1 gen() 4957 MB/s
Jan 17 00:00:57.375441 kernel: raid6: using algorithm neonx8 gen() 15626 MB/s
Jan 17 00:00:57.391398 kernel: raid6: .... xor() 11464 MB/s, rmw enabled
Jan 17 00:00:57.391472 kernel: raid6: using neon recovery algorithm
Jan 17 00:00:57.396593 kernel: xor: measuring software checksum speed
Jan 17 00:00:57.396648 kernel: 8regs : 19750 MB/sec
Jan 17 00:00:57.396668 kernel: 32regs : 19660 MB/sec
Jan 17 00:00:57.397363 kernel: arm64_neon : 27070 MB/sec
Jan 17 00:00:57.397394 kernel: xor: using function: arm64_neon (27070 MB/sec)
Jan 17 00:00:57.449381 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:00:57.463117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:00:57.469731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:00:57.484978 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 17 00:00:57.488459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:00:57.497536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:00:57.524178 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jan 17 00:00:57.559974 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:00:57.566536 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:00:57.631747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:00:57.638554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:00:57.659697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:00:57.662586 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:00:57.663424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:00:57.664108 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:00:57.673993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:00:57.688523 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:00:57.732405 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:00:57.745376 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:00:57.746352 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 17 00:00:57.757180 kernel: ACPI: bus type USB registered
Jan 17 00:00:57.757256 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:00:57.757772 kernel: usbcore: registered new interface driver hub
Jan 17 00:00:57.759339 kernel: usbcore: registered new device driver usb
Jan 17 00:00:57.758049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:00:57.758406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:57.762099 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:57.766392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:00:57.766581 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:57.769037 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:57.784991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:57.798692 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 17 00:00:57.805384 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 17 00:00:57.805607 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:00:57.810384 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:00:57.813467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:57.816456 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:00:57.816642 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 17 00:00:57.816728 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 00:00:57.816809 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:00:57.816889 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 17 00:00:57.819502 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 00:00:57.823093 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:00:57.823422 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 00:00:57.823516 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 00:00:57.823648 kernel: hub 2-0:1.0: USB hub found
Jan 17 00:00:57.823739 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 00:00:57.830607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:57.844368 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 17 00:00:57.844575 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 17 00:00:57.845374 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 17 00:00:57.845523 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 17 00:00:57.845611 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:00:57.851578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:57.854926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:00:57.854957 kernel: GPT:17805311 != 80003071
Jan 17 00:00:57.854967 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:00:57.854977 kernel: GPT:17805311 != 80003071
Jan 17 00:00:57.854986 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:00:57.854995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:57.855004 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 17 00:00:57.893626 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (525)
Jan 17 00:00:57.902368 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (518)
Jan 17 00:00:57.908363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 17 00:00:57.921839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 00:00:57.929078 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 17 00:00:57.938518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 17 00:00:57.940053 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 17 00:00:57.945522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:00:57.966895 disk-uuid[571]: Primary Header is updated.
Jan 17 00:00:57.966895 disk-uuid[571]: Secondary Entries is updated.
Jan 17 00:00:57.966895 disk-uuid[571]: Secondary Header is updated.
Jan 17 00:00:57.974409 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:58.065361 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 17 00:00:58.202736 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 17 00:00:58.202801 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 17 00:00:58.203081 kernel: usbcore: registered new interface driver usbhid
Jan 17 00:00:58.203100 kernel: usbhid: USB HID core driver
Jan 17 00:00:58.307507 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 17 00:00:58.439380 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 17 00:00:58.493386 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 17 00:00:58.988991 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:58.990628 disk-uuid[572]: The operation has completed successfully.
Jan 17 00:00:59.044591 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:00:59.044709 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:00:59.061719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:00:59.067824 sh[590]: Success
Jan 17 00:00:59.085363 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 00:00:59.145927 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:00:59.149474 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:00:59.150160 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:00:59.169004 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 17 00:00:59.169102 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:59.169133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:00:59.169183 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:00:59.169468 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:00:59.176374 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:00:59.178025 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:00:59.180427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:00:59.189568 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:00:59.192512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:00:59.206900 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:59.206955 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:59.206967 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:59.212355 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:00:59.212415 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:59.224067 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:00:59.226377 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:59.231739 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:00:59.238583 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:00:59.330448 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:00:59.337515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:00:59.342603 ignition[685]: Ignition 2.19.0
Jan 17 00:00:59.343226 ignition[685]: Stage: fetch-offline
Jan 17 00:00:59.343720 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:59.343730 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:59.343915 ignition[685]: parsed url from cmdline: ""
Jan 17 00:00:59.343918 ignition[685]: no config URL provided
Jan 17 00:00:59.343923 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:00:59.343930 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:00:59.347403 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:00:59.343936 ignition[685]: failed to fetch config: resource requires networking
Jan 17 00:00:59.344150 ignition[685]: Ignition finished successfully
Jan 17 00:00:59.366211 systemd-networkd[778]: lo: Link UP
Jan 17 00:00:59.366229 systemd-networkd[778]: lo: Gained carrier
Jan 17 00:00:59.368509 systemd-networkd[778]: Enumeration completed
Jan 17 00:00:59.368877 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:00:59.370114 systemd[1]: Reached target network.target - Network.
Jan 17 00:00:59.370707 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:59.370711 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:00:59.372801 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:59.372809 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:00:59.375844 systemd-networkd[778]: eth0: Link UP
Jan 17 00:00:59.375856 systemd-networkd[778]: eth0: Gained carrier
Jan 17 00:00:59.375874 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:59.379914 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:00:59.381660 systemd-networkd[778]: eth1: Link UP
Jan 17 00:00:59.381663 systemd-networkd[778]: eth1: Gained carrier
Jan 17 00:00:59.381674 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:59.396983 ignition[781]: Ignition 2.19.0
Jan 17 00:00:59.396999 ignition[781]: Stage: fetch
Jan 17 00:00:59.397321 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:59.397379 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:59.397532 ignition[781]: parsed url from cmdline: ""
Jan 17 00:00:59.397538 ignition[781]: no config URL provided
Jan 17 00:00:59.397547 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:00:59.397561 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:00:59.397594 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 17 00:00:59.398542 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 17 00:00:59.419462 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 17 00:00:59.442464 systemd-networkd[778]: eth0: DHCPv4 address 46.224.97.13/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 17 00:00:59.599157 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 17 00:00:59.604870 ignition[781]: GET result: OK
Jan 17 00:00:59.605071 ignition[781]: parsing config with SHA512: fe4f2d09e591869212ee5e15451840e473bcb3e877dee683ebd1f634b39078927e7f83ce1e6ada0ea487609b75c5c1be346aef5bb3971060280ac31f5e5b3aed
Jan 17 00:00:59.612951 unknown[781]: fetched base config from "system"
Jan 17 00:00:59.613743 ignition[781]: fetch: fetch complete
Jan 17 00:00:59.612966 unknown[781]: fetched base config from "system"
Jan 17 00:00:59.613748 ignition[781]: fetch: fetch passed
Jan 17 00:00:59.612975 unknown[781]: fetched user config from "hetzner"
Jan 17 00:00:59.613800 ignition[781]: Ignition finished successfully
Jan 17 00:00:59.617499 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:00:59.627670 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:00:59.643885 ignition[789]: Ignition 2.19.0
Jan 17 00:00:59.643896 ignition[789]: Stage: kargs
Jan 17 00:00:59.644099 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:59.644111 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:59.647636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:00:59.645248 ignition[789]: kargs: kargs passed
Jan 17 00:00:59.645364 ignition[789]: Ignition finished successfully
Jan 17 00:00:59.664256 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:00:59.683058 ignition[795]: Ignition 2.19.0
Jan 17 00:00:59.684164 ignition[795]: Stage: disks
Jan 17 00:00:59.684734 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:59.684761 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 17 00:00:59.687158 ignition[795]: disks: disks passed
Jan 17 00:00:59.687253 ignition[795]: Ignition finished successfully
Jan 17 00:00:59.689416 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:00:59.690943 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:00:59.691891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:00:59.693083 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:00:59.694217 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:00:59.695427 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:00:59.702561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:00:59.720794 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:00:59.725972 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:00:59.734560 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:00:59.787363 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:00:59.788240 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:00:59.790063 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:00:59.798584 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:00:59.802969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:00:59.805896 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:00:59.808705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:00:59.816380 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811) Jan 17 00:00:59.808745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 00:00:59.818421 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:59.818442 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:59.819383 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:00:59.822946 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:00:59.823002 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:00:59.827087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:00:59.829638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:00:59.840565 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:00:59.884206 coreos-metadata[813]: Jan 17 00:00:59.884 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 00:00:59.886785 coreos-metadata[813]: Jan 17 00:00:59.886 INFO Fetch successful Jan 17 00:00:59.889037 coreos-metadata[813]: Jan 17 00:00:59.887 INFO wrote hostname ci-4081-3-6-n-ce65c18e74 to /sysroot/etc/hostname Jan 17 00:00:59.891323 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:00:59.891449 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:00:59.898756 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:00:59.904241 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:00:59.909350 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:01:00.017455 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:01:00.026520 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:01:00.029575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 17 00:01:00.042405 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:00.063135 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:01:00.068360 ignition[930]: INFO : Ignition 2.19.0 Jan 17 00:01:00.068360 ignition[930]: INFO : Stage: mount Jan 17 00:01:00.068360 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:00.068360 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:00.070951 ignition[930]: INFO : mount: mount passed Jan 17 00:01:00.071437 ignition[930]: INFO : Ignition finished successfully Jan 17 00:01:00.073741 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:01:00.080567 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:01:00.170000 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:01:00.177846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:01:00.193034 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941) Jan 17 00:01:00.193096 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:00.193117 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:01:00.193584 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:01:00.197375 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:01:00.197444 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:01:00.200902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:01:00.221947 ignition[958]: INFO : Ignition 2.19.0 Jan 17 00:01:00.221947 ignition[958]: INFO : Stage: files Jan 17 00:01:00.223206 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:00.223206 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:00.223206 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:01:00.226204 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:01:00.226204 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:01:00.229001 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:01:00.229001 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:01:00.229001 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:01:00.228882 unknown[958]: wrote ssh authorized keys file for user: core Jan 17 00:01:00.234398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:01:00.234398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:01:00.234398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:01:00.234398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 17 00:01:00.294139 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:01:00.400382 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 
00:01:00.400382 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:01:00.400382 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 00:01:00.642547 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:01:00.824368 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:01:00.824368 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 
00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:01:00.827317 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 17 00:01:01.041695 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:01:01.147750 systemd-networkd[778]: eth0: Gained IPv6LL Jan 17 00:01:01.148209 systemd-networkd[778]: eth1: Gained IPv6LL Jan 17 00:01:01.462132 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:01:01.462132 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service" 
Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:01:01.465505 ignition[958]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:01:01.465505 ignition[958]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:01:01.465505 ignition[958]: INFO : files: files passed Jan 17 00:01:01.465505 ignition[958]: INFO : Ignition finished successfully Jan 17 00:01:01.467048 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 17 00:01:01.479091 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:01:01.483023 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:01:01.487573 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:01:01.487689 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:01:01.503428 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:01.503428 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:01.506712 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:01.508739 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:01:01.510784 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:01:01.516684 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:01:01.549396 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:01:01.549519 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:01:01.550967 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:01:01.551897 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:01:01.553287 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:01:01.555530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:01:01.573078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:01:01.578561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 17 00:01:01.592992 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:01:01.593822 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:01:01.594599 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:01:01.595799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:01:01.595934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:01:01.598201 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:01:01.598949 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:01:01.600902 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:01:01.602652 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:01:01.603807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:01:01.604886 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:01:01.605967 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:01:01.607151 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:01:01.608239 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:01:01.609301 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:01:01.610375 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:01:01.610510 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:01:01.612027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:01:01.613069 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:01:01.614105 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:01:01.614187 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 00:01:01.615260 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:01:01.615412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:01:01.617075 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:01:01.617195 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:01:01.618448 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:01:01.618549 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:01:01.619764 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:01:01.619863 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:01:01.629655 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:01:01.635587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:01:01.636139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:01:01.636291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:01:01.639693 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:01:01.639808 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:01:01.647635 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:01:01.647771 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 00:01:01.659043 ignition[1010]: INFO : Ignition 2.19.0 Jan 17 00:01:01.659043 ignition[1010]: INFO : Stage: umount Jan 17 00:01:01.661711 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:01.661711 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:01.664872 ignition[1010]: INFO : umount: umount passed Jan 17 00:01:01.664872 ignition[1010]: INFO : Ignition finished successfully Jan 17 00:01:01.666590 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:01:01.668238 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:01:01.668447 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:01:01.669749 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:01:01.669863 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:01:01.670824 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:01:01.670874 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:01:01.671570 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:01:01.671609 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:01:01.672883 systemd[1]: Stopped target network.target - Network. Jan 17 00:01:01.673843 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:01:01.673911 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:01:01.675150 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:01:01.676266 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:01:01.679538 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:01:01.680224 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:01:01.681150 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 17 00:01:01.682297 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:01:01.682364 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:01:01.683737 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:01:01.683779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:01:01.684750 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:01:01.684804 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:01:01.685723 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:01:01.685766 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:01:01.686902 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:01:01.687888 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:01:01.688928 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:01:01.689091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:01:01.691644 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:01:01.691741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:01:01.696379 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 17 00:01:01.698045 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:01:01.698197 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:01:01.700933 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 17 00:01:01.701224 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:01:01.701610 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:01:01.703019 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:01:01.703161 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 17 00:01:01.704963 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:01:01.705031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:01:01.711476 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:01:01.712138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:01:01.712214 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:01:01.714484 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:01:01.714546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:01:01.715522 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:01:01.715565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:01:01.717089 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:01:01.730468 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:01:01.730592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:01:01.744713 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:01:01.745023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:01:01.748690 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:01:01.748774 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:01:01.750721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:01:01.750756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:01:01.751424 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:01:01.751477 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:01:01.753025 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 17 00:01:01.753070 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:01:01.754553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:01:01.754604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:01:01.759617 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:01:01.760217 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:01:01.760311 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:01:01.762515 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:01:01.762574 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:01:01.763242 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:01:01.763301 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:01:01.766938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:01:01.766998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:01:01.770548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:01:01.770732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:01:01.771812 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:01:01.775610 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:01:01.789093 systemd[1]: Switching root. Jan 17 00:01:01.826265 systemd-journald[238]: Journal stopped Jan 17 00:01:02.888193 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:01:02.888270 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:01:02.888284 kernel: SELinux: policy capability open_perms=1 Jan 17 00:01:02.888298 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:01:02.888309 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:01:02.888322 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:01:02.888358 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:01:02.888371 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:01:02.888380 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:01:02.888390 kernel: audit: type=1403 audit(1768608062.053:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:01:02.888401 systemd[1]: Successfully loaded SELinux policy in 34.885ms. Jan 17 00:01:02.888423 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.292ms. Jan 17 00:01:02.888435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:01:02.888446 systemd[1]: Detected virtualization kvm. Jan 17 00:01:02.888458 systemd[1]: Detected architecture arm64. Jan 17 00:01:02.888468 systemd[1]: Detected first boot. Jan 17 00:01:02.888479 systemd[1]: Hostname set to . Jan 17 00:01:02.888489 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:01:02.888500 zram_generator::config[1071]: No configuration found. Jan 17 00:01:02.888515 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:01:02.888525 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:01:02.888538 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jan 17 00:01:02.888551 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:01:02.888561 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:01:02.888571 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:01:02.888582 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:01:02.888593 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:01:02.888604 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:01:02.888614 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:01:02.888624 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:01:02.888636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:01:02.888647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:01:02.888657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:01:02.888668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:01:02.888678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:01:02.888689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:01:02.888699 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 00:01:02.888710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:01:02.888720 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:01:02.888732 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 17 00:01:02.888747 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:01:02.888758 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:02.888775 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:01:02.888786 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:01:02.888796 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:01:02.888806 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:01:02.888818 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:01:02.888829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:02.888839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:02.888850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:02.888860 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:01:02.888871 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:01:02.888881 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:01:02.888892 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:01:02.888902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:01:02.888917 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:01:02.888930 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:01:02.888941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:01:02.888951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:02.888962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:01:02.888977 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:01:02.888992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:02.889003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:01:02.889016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:02.889026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:01:02.889037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:02.889049 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:01:02.889060 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 00:01:02.889072 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 00:01:02.889084 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:01:02.889095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:01:02.889106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:01:02.889139 systemd-journald[1158]: Collecting audit messages is disabled.
Jan 17 00:01:02.889168 kernel: ACPI: bus type drm_connector registered
Jan 17 00:01:02.889179 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:01:02.889190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:01:02.889202 kernel: fuse: init (API version 7.39)
Jan 17 00:01:02.889212 systemd-journald[1158]: Journal started
Jan 17 00:01:02.889233 systemd-journald[1158]: Runtime Journal (/run/log/journal/51ad6bff309c4826ab5196ae9e8eede2) is 8.0M, max 76.6M, 68.6M free.
Jan 17 00:01:02.891103 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:01:02.897361 kernel: loop: module loaded
Jan 17 00:01:02.901454 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:01:02.904598 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:01:02.905625 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:01:02.907082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:01:02.911509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:01:02.912366 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:01:02.913667 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:01:02.914905 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:02.915964 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:01:02.916119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:01:02.917102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:02.917304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:02.918261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:01:02.918669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:01:02.919536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:02.919679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:02.920633 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:01:02.920776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:01:02.921898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:02.923738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:02.926825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:02.928312 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:01:02.930023 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:01:02.942462 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:01:02.950452 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:01:02.955137 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:01:02.958624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:01:02.969860 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:01:02.976563 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:01:02.979562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:01:02.990537 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:01:02.991755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:01:02.996541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:01:03.001756 systemd-journald[1158]: Time spent on flushing to /var/log/journal/51ad6bff309c4826ab5196ae9e8eede2 is 33.538ms for 1113 entries.
Jan 17 00:01:03.001756 systemd-journald[1158]: System Journal (/var/log/journal/51ad6bff309c4826ab5196ae9e8eede2) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:01:03.045545 systemd-journald[1158]: Received client request to flush runtime journal.
Jan 17 00:01:03.006586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:01:03.011880 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:03.015096 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:01:03.017536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:01:03.027669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:01:03.046879 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:01:03.051298 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:01:03.058174 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:01:03.071825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:03.072835 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:01:03.081156 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 17 00:01:03.081178 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 17 00:01:03.085700 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:03.094554 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:01:03.125518 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:01:03.133574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:01:03.151083 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 17 00:01:03.151103 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 17 00:01:03.155713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:03.502535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:01:03.510581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:03.548576 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Jan 17 00:01:03.571319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:03.587833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:01:03.600500 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:01:03.647054 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 17 00:01:03.677694 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:01:03.741719 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:01:03.764823 systemd-networkd[1244]: lo: Link UP
Jan 17 00:01:03.764835 systemd-networkd[1244]: lo: Gained carrier
Jan 17 00:01:03.768685 systemd-networkd[1244]: Enumeration completed
Jan 17 00:01:03.768824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:01:03.770124 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:03.770135 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:01:03.772076 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:03.772088 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:01:03.773975 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:03.774031 systemd-networkd[1244]: eth0: Link UP
Jan 17 00:01:03.774037 systemd-networkd[1244]: eth0: Gained carrier
Jan 17 00:01:03.774052 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:03.774598 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:01:03.780840 systemd-networkd[1244]: eth1: Link UP
Jan 17 00:01:03.780852 systemd-networkd[1244]: eth1: Gained carrier
Jan 17 00:01:03.780871 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:03.802440 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1245)
Jan 17 00:01:03.837965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:03.862882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:03.864923 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 17 00:01:03.869504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:03.874637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:03.877461 systemd-networkd[1244]: eth0: DHCPv4 address 46.224.97.13/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 17 00:01:03.881139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:01:03.881182 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:01:03.881606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:03.881763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:03.890699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:03.890867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:03.899016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:03.899233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:03.908646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 00:01:03.910107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:01:03.910200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:01:03.911621 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 17 00:01:03.911668 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 00:01:03.911707 kernel: [drm] features: -context_init
Jan 17 00:01:03.912621 kernel: [drm] number of scanouts: 1
Jan 17 00:01:03.912668 kernel: [drm] number of cap sets: 0
Jan 17 00:01:03.916369 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 17 00:01:03.919625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:03.921978 kernel: Console: switching to colour frame buffer device 160x50
Jan 17 00:01:03.929021 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 00:01:03.939866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:01:03.940206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:03.943523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:04.016818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:04.025130 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:01:04.036695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:01:04.051350 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:01:04.082266 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:01:04.085689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:04.093646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:01:04.098937 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:01:04.126604 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:01:04.129146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:01:04.131753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:01:04.131877 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:01:04.132550 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:01:04.134353 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:01:04.141550 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:01:04.144820 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:01:04.147414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:01:04.149532 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:01:04.156552 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:01:04.159507 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:01:04.160912 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:01:04.180660 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:01:04.183359 kernel: loop0: detected capacity change from 0 to 8
Jan 17 00:01:04.191468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:01:04.206804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:01:04.210495 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:01:04.212818 kernel: loop1: detected capacity change from 0 to 114328
Jan 17 00:01:04.243540 kernel: loop2: detected capacity change from 0 to 114432
Jan 17 00:01:04.271701 kernel: loop3: detected capacity change from 0 to 207008
Jan 17 00:01:04.305369 kernel: loop4: detected capacity change from 0 to 8
Jan 17 00:01:04.309421 kernel: loop5: detected capacity change from 0 to 114328
Jan 17 00:01:04.324615 kernel: loop6: detected capacity change from 0 to 114432
Jan 17 00:01:04.336377 kernel: loop7: detected capacity change from 0 to 207008
Jan 17 00:01:04.348537 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 17 00:01:04.349148 (sd-merge)[1329]: Merged extensions into '/usr'.
Jan 17 00:01:04.355355 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:01:04.355697 systemd[1]: Reloading...
Jan 17 00:01:04.468265 zram_generator::config[1358]: No configuration found.
Jan 17 00:01:04.594403 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:01:04.605310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:04.666413 systemd[1]: Reloading finished in 308 ms.
Jan 17 00:01:04.681565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:01:04.685383 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:01:04.698711 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:01:04.705585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:01:04.709879 systemd[1]: Reloading requested from client PID 1401 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:01:04.709896 systemd[1]: Reloading...
Jan 17 00:01:04.734441 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:01:04.735064 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:01:04.735948 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:01:04.738619 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 17 00:01:04.738788 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Jan 17 00:01:04.741628 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:01:04.741739 systemd-tmpfiles[1403]: Skipping /boot
Jan 17 00:01:04.750151 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:01:04.750955 systemd-tmpfiles[1403]: Skipping /boot
Jan 17 00:01:04.780352 zram_generator::config[1428]: No configuration found.
Jan 17 00:01:04.899068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:04.960721 systemd[1]: Reloading finished in 250 ms.
Jan 17 00:01:04.972884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:04.985538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:04.992648 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:01:04.997449 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:01:05.011581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:05.016993 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:01:05.028707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:05.035804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:05.042044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:05.052665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:05.053420 systemd-networkd[1244]: eth0: Gained IPv6LL
Jan 17 00:01:05.059501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:01:05.065619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:01:05.067965 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:01:05.073790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:05.073970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:05.077429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:05.077605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:05.086828 augenrules[1504]: No rules
Jan 17 00:01:05.092729 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:05.095100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:05.099040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:05.117844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:01:05.124134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:05.133608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:05.135976 systemd-resolved[1486]: Positive Trust Anchors:
Jan 17 00:01:05.136272 systemd-resolved[1486]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:01:05.137966 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:01:05.138770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:01:05.146203 systemd-resolved[1486]: Using system hostname 'ci-4081-3-6-n-ce65c18e74'.
Jan 17 00:01:05.148623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:05.152641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:05.153630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:01:05.165657 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:01:05.169183 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:05.172947 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:01:05.173753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:01:05.174713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:05.174855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:05.175764 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:01:05.175892 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:01:05.176734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:05.176874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:05.177786 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:05.178874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:05.184642 systemd-networkd[1244]: eth1: Gained IPv6LL
Jan 17 00:01:05.186544 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:01:05.189972 systemd[1]: Reached target network.target - Network.
Jan 17 00:01:05.190582 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:01:05.191145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:05.192287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:01:05.192380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:01:05.203780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 00:01:05.205456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:01:05.249477 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 00:01:05.251312 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:01:05.252371 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:01:05.253500 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:01:05.254545 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:01:05.255566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:01:05.255705 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:01:05.256422 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:01:05.257422 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:01:05.258359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:01:05.259153 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:01:05.261429 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:01:05.263749 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:01:05.265676 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:01:05.270960 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:01:05.271946 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:01:05.272761 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:01:05.273681 systemd[1]: System is tainted: cgroupsv1
Jan 17 00:01:05.273809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:01:05.273899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:01:05.276520 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:01:05.286587 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:01:05.291649 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:01:05.297494 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:01:05.301316 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:01:05.301979 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:01:05.307680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:05.311353 jq[1550]: false
Jan 17 00:01:05.317433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:01:05.331504 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:01:05.343295 extend-filesystems[1551]: Found loop4
Jan 17 00:01:05.342029 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:01:05.348353 coreos-metadata[1545]: Jan 17 00:01:05.346 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 17 00:01:05.348353 coreos-metadata[1545]: Jan 17 00:01:05.346 INFO Fetch successful
Jan 17 00:01:05.348353 coreos-metadata[1545]: Jan 17 00:01:05.346 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 17 00:01:05.348353 coreos-metadata[1545]: Jan 17 00:01:05.346 INFO Fetch successful
Jan 17 00:01:05.346744 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found loop5
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found loop6
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found loop7
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda1
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda2
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda3
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found usr
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda4
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda6
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda7
Jan 17 00:01:05.352191 extend-filesystems[1551]: Found sda9
Jan 17 00:01:05.352191 extend-filesystems[1551]: Checking size of /dev/sda9
Jan 17 00:01:05.372136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:01:04.925570 systemd-timesyncd[1540]: Contacted time server 162.55.190.98:123 (0.flatcar.pool.ntp.org).
Jan 17 00:01:04.939545 systemd-journald[1158]: Time jumped backwards, rotating.
Jan 17 00:01:04.927207 systemd-timesyncd[1540]: Initial clock synchronization to Sat 2026-01-17 00:01:04.925454 UTC.
Jan 17 00:01:04.929887 systemd-resolved[1486]: Clock change detected. Flushing caches.
Jan 17 00:01:04.935893 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:01:04.944946 dbus-daemon[1546]: [system] SELinux support is enabled
Jan 17 00:01:04.962319 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:01:04.964500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:01:04.973189 extend-filesystems[1551]: Resized partition /dev/sda9 Jan 17 00:01:04.977846 extend-filesystems[1588]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:01:04.984253 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 17 00:01:04.978533 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:01:04.988945 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:01:04.993768 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:01:05.016402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:01:05.016638 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:01:05.022576 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:01:05.022837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:01:05.028449 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:01:05.040395 jq[1591]: true Jan 17 00:01:05.041433 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:01:05.041661 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:01:05.072666 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:01:05.078719 tar[1597]: linux-arm64/LICENSE Jan 17 00:01:05.079856 tar[1597]: linux-arm64/helm Jan 17 00:01:05.080593 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:01:05.081544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 17 00:01:05.083234 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:01:05.083257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:01:05.104172 update_engine[1587]: I20260117 00:01:05.100955 1587 main.cc:92] Flatcar Update Engine starting Jan 17 00:01:05.109438 jq[1602]: true Jan 17 00:01:05.124453 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:01:05.127640 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:01:05.131453 update_engine[1587]: I20260117 00:01:05.131155 1587 update_check_scheduler.cc:74] Next update check in 11m40s Jan 17 00:01:05.163165 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1237) Jan 17 00:01:05.166515 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:01:05.204161 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 17 00:01:05.204698 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:01:05.205360 systemd-logind[1582]: New seat seat0. Jan 17 00:01:05.207510 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:01:05.216797 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 00:01:05.216825 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 17 00:01:05.217583 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 17 00:01:05.222195 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:01:05.222195 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 17 00:01:05.222195 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 17 00:01:05.233251 extend-filesystems[1551]: Resized filesystem in /dev/sda9 Jan 17 00:01:05.233251 extend-filesystems[1551]: Found sr0 Jan 17 00:01:05.228579 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:01:05.228829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:01:05.251958 bash[1642]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:01:05.253902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:01:05.270841 systemd[1]: Starting sshkeys.service... Jan 17 00:01:05.300279 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:01:05.312526 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:01:05.398133 coreos-metadata[1652]: Jan 17 00:01:05.397 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 00:01:05.402101 coreos-metadata[1652]: Jan 17 00:01:05.402 INFO Fetch successful Jan 17 00:01:05.407471 unknown[1652]: wrote ssh authorized keys file for user: core Jan 17 00:01:05.410137 containerd[1598]: time="2026-01-17T00:01:05.408505722Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:01:05.450568 update-ssh-keys[1659]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:01:05.449415 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:01:05.456785 systemd[1]: Finished sshkeys.service. 
Jan 17 00:01:05.463799 containerd[1598]: time="2026-01-17T00:01:05.463744482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.465459 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.465965682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466025962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466045682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466236442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466254722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466320922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466333642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466546922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466565642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466579802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:05.466875 containerd[1598]: time="2026-01-17T00:01:05.466589802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.467210 containerd[1598]: time="2026-01-17T00:01:05.466659882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.467210 containerd[1598]: time="2026-01-17T00:01:05.466840642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:05.467423 containerd[1598]: time="2026-01-17T00:01:05.467398122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:05.467477 containerd[1598]: time="2026-01-17T00:01:05.467465042Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:01:05.467613 containerd[1598]: time="2026-01-17T00:01:05.467594922Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 00:01:05.467725 containerd[1598]: time="2026-01-17T00:01:05.467708322Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:01:05.472360 containerd[1598]: time="2026-01-17T00:01:05.472324282Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:01:05.472633 containerd[1598]: time="2026-01-17T00:01:05.472616522Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:01:05.472809 containerd[1598]: time="2026-01-17T00:01:05.472792682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:01:05.473344 containerd[1598]: time="2026-01-17T00:01:05.473066802Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:01:05.473344 containerd[1598]: time="2026-01-17T00:01:05.473095642Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:01:05.473344 containerd[1598]: time="2026-01-17T00:01:05.473269002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474302802Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474448602Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474465002Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474480002Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474493442Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474506242Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474518682Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474533122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474547442Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474559762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474571802Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474586122Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474607042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475261 containerd[1598]: time="2026-01-17T00:01:05.474621722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474639562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474653202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474666842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474680322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474692242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474704962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474718082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474731842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474743642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474755642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474775802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474792362Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474813122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474825282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475508 containerd[1598]: time="2026-01-17T00:01:05.474836362Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.474944762Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.474964002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.474975882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.474989122Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.474999122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.475028482Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.475038682Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:01:05.475732 containerd[1598]: time="2026-01-17T00:01:05.475049002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:01:05.479148 containerd[1598]: time="2026-01-17T00:01:05.477297002Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:01:05.479148 containerd[1598]: time="2026-01-17T00:01:05.477398082Z" level=info msg="Connect containerd service" Jan 17 00:01:05.479148 containerd[1598]: time="2026-01-17T00:01:05.477515202Z" level=info msg="using legacy CRI server" Jan 17 00:01:05.479148 containerd[1598]: time="2026-01-17T00:01:05.477524322Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:01:05.479148 containerd[1598]: time="2026-01-17T00:01:05.477760442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:01:05.484455 containerd[1598]: time="2026-01-17T00:01:05.484408962Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:01:05.489823 containerd[1598]: time="2026-01-17T00:01:05.486776962Z" level=info msg="Start subscribing containerd event" Jan 17 
00:01:05.490068 containerd[1598]: time="2026-01-17T00:01:05.490002162Z" level=info msg="Start recovering state" Jan 17 00:01:05.490414 containerd[1598]: time="2026-01-17T00:01:05.490398882Z" level=info msg="Start event monitor" Jan 17 00:01:05.490495 containerd[1598]: time="2026-01-17T00:01:05.490482402Z" level=info msg="Start snapshots syncer" Jan 17 00:01:05.490596 containerd[1598]: time="2026-01-17T00:01:05.490581682Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:01:05.490645 containerd[1598]: time="2026-01-17T00:01:05.490634482Z" level=info msg="Start streaming server" Jan 17 00:01:05.491053 containerd[1598]: time="2026-01-17T00:01:05.487356282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:01:05.492603 containerd[1598]: time="2026-01-17T00:01:05.491231162Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:01:05.491411 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:01:05.492768 containerd[1598]: time="2026-01-17T00:01:05.492750122Z" level=info msg="containerd successfully booted in 0.087497s" Jan 17 00:01:05.985889 tar[1597]: linux-arm64/README.md Jan 17 00:01:06.001220 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:01:06.221528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:06.234797 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:06.369732 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 17 00:01:06.721685 kubelet[1685]: E0117 00:01:06.721584 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:06.724757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:06.724964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:07.498973 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:01:07.525602 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:01:07.532436 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:01:07.536886 systemd[1]: Started sshd@0-46.224.97.13:22-175.206.113.91:47146.service - OpenSSH per-connection server daemon (175.206.113.91:47146). Jan 17 00:01:07.544453 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:01:07.544847 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:01:07.551602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:01:07.566953 sshd[1703]: Connection closed by 175.206.113.91 port 47146 Jan 17 00:01:07.568580 systemd[1]: sshd@0-46.224.97.13:22-175.206.113.91:47146.service: Deactivated successfully. Jan 17 00:01:07.576826 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:01:07.582724 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:01:07.601097 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:01:07.604343 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:01:07.606248 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 17 00:01:07.607462 systemd[1]: Startup finished in 6.149s (kernel) + 6.040s (userspace) = 12.190s. Jan 17 00:01:16.848238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:01:16.855445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:16.977171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:16.981591 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:17.034844 kubelet[1734]: E0117 00:01:17.034738 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:17.039302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:17.039661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:27.097586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:01:27.104446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:27.233362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:01:27.246791 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:27.291948 kubelet[1754]: E0117 00:01:27.291867 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:27.295342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:27.295524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:29.715595 systemd[1]: Started sshd@1-46.224.97.13:22-37.153.194.108:54126.service - OpenSSH per-connection server daemon (37.153.194.108:54126). Jan 17 00:01:33.896535 systemd[1]: Started sshd@2-46.224.97.13:22-112.194.142.167:38990.service - OpenSSH per-connection server daemon (112.194.142.167:38990). Jan 17 00:01:34.449192 sshd[1762]: Connection closed by 112.194.142.167 port 38990 Jan 17 00:01:34.450246 systemd[1]: sshd@2-46.224.97.13:22-112.194.142.167:38990.service: Deactivated successfully. Jan 17 00:01:37.347960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:01:37.358460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:37.516353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:01:37.526758 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:37.576405 kubelet[1778]: E0117 00:01:37.576329 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:37.581460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:37.581805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:40.535962 systemd[1]: Started sshd@3-46.224.97.13:22-4.153.228.146:58016.service - OpenSSH per-connection server daemon (4.153.228.146:58016). Jan 17 00:01:41.159203 sshd[1786]: Accepted publickey for core from 4.153.228.146 port 58016 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:01:41.162198 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:41.176517 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:01:41.177091 systemd-logind[1582]: New session 1 of user core. Jan 17 00:01:41.182400 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:01:41.196010 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:01:41.208482 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:01:41.211622 (systemd)[1792]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:01:41.327984 systemd[1792]: Queued start job for default target default.target. Jan 17 00:01:41.329185 systemd[1792]: Created slice app.slice - User Application Slice. 
Jan 17 00:01:41.329314 systemd[1792]: Reached target paths.target - Paths. Jan 17 00:01:41.329387 systemd[1792]: Reached target timers.target - Timers. Jan 17 00:01:41.336303 systemd[1792]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:01:41.346619 systemd[1792]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:01:41.346706 systemd[1792]: Reached target sockets.target - Sockets. Jan 17 00:01:41.346718 systemd[1792]: Reached target basic.target - Basic System. Jan 17 00:01:41.346775 systemd[1792]: Reached target default.target - Main User Target. Jan 17 00:01:41.346802 systemd[1792]: Startup finished in 127ms. Jan 17 00:01:41.347243 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:01:41.355670 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:01:41.799419 systemd[1]: Started sshd@4-46.224.97.13:22-4.153.228.146:58026.service - OpenSSH per-connection server daemon (4.153.228.146:58026). Jan 17 00:01:42.403075 sshd[1804]: Accepted publickey for core from 4.153.228.146 port 58026 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:01:42.405205 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:42.411246 systemd-logind[1582]: New session 2 of user core. Jan 17 00:01:42.417672 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:01:42.836519 sshd[1804]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:42.842314 systemd[1]: sshd@4-46.224.97.13:22-4.153.228.146:58026.service: Deactivated successfully. Jan 17 00:01:42.846390 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:01:42.847217 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:01:42.848329 systemd-logind[1582]: Removed session 2. 
Jan 17 00:01:42.948534 systemd[1]: Started sshd@5-46.224.97.13:22-4.153.228.146:58036.service - OpenSSH per-connection server daemon (4.153.228.146:58036). Jan 17 00:01:43.572044 sshd[1812]: Accepted publickey for core from 4.153.228.146 port 58036 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:01:43.573971 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:43.580496 systemd-logind[1582]: New session 3 of user core. Jan 17 00:01:43.592752 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:01:44.012318 sshd[1812]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:44.017418 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:01:44.018316 systemd[1]: sshd@5-46.224.97.13:22-4.153.228.146:58036.service: Deactivated successfully. Jan 17 00:01:44.022456 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:01:44.024290 systemd-logind[1582]: Removed session 3. Jan 17 00:01:44.111555 systemd[1]: Started sshd@6-46.224.97.13:22-4.153.228.146:58038.service - OpenSSH per-connection server daemon (4.153.228.146:58038). Jan 17 00:01:44.710138 sshd[1820]: Accepted publickey for core from 4.153.228.146 port 58038 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:01:44.712104 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:44.717386 systemd-logind[1582]: New session 4 of user core. Jan 17 00:01:44.726736 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:01:45.136432 sshd[1820]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:45.140541 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:01:45.141385 systemd[1]: sshd@6-46.224.97.13:22-4.153.228.146:58038.service: Deactivated successfully. Jan 17 00:01:45.145780 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 17 00:01:45.147306 systemd-logind[1582]: Removed session 4.
Jan 17 00:01:45.240615 systemd[1]: Started sshd@7-46.224.97.13:22-4.153.228.146:33082.service - OpenSSH per-connection server daemon (4.153.228.146:33082).
Jan 17 00:01:45.836341 sshd[1828]: Accepted publickey for core from 4.153.228.146 port 33082 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:45.838651 sshd[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:45.844245 systemd-logind[1582]: New session 5 of user core.
Jan 17 00:01:45.849729 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:01:46.178101 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:01:46.178467 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:46.197431 sudo[1832]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:46.294666 sshd[1828]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:46.300482 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:01:46.301583 systemd[1]: sshd@7-46.224.97.13:22-4.153.228.146:33082.service: Deactivated successfully.
Jan 17 00:01:46.304464 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:01:46.305422 systemd-logind[1582]: Removed session 5.
Jan 17 00:01:46.403818 systemd[1]: Started sshd@8-46.224.97.13:22-4.153.228.146:33090.service - OpenSSH per-connection server daemon (4.153.228.146:33090).
Jan 17 00:01:47.008287 sshd[1837]: Accepted publickey for core from 4.153.228.146 port 33090 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:47.010859 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:47.015808 systemd-logind[1582]: New session 6 of user core.
Jan 17 00:01:47.028686 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:01:47.340526 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:01:47.340828 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:47.345148 sudo[1842]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:47.351571 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:01:47.351853 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:47.368097 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:47.382370 auditctl[1845]: No rules
Jan 17 00:01:47.383024 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:01:47.383443 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:47.395091 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:47.422314 augenrules[1864]: No rules
Jan 17 00:01:47.425619 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:47.428500 sudo[1841]: pam_unix(sudo:session): session closed for user root
Jan 17 00:01:47.525297 sshd[1837]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:47.531248 systemd[1]: sshd@8-46.224.97.13:22-4.153.228.146:33090.service: Deactivated successfully.
Jan 17 00:01:47.534837 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:01:47.536083 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:01:47.539031 systemd-logind[1582]: Removed session 6.
Jan 17 00:01:47.598466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 00:01:47.604431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:47.627612 systemd[1]: Started sshd@9-46.224.97.13:22-4.153.228.146:33098.service - OpenSSH per-connection server daemon (4.153.228.146:33098).
Jan 17 00:01:47.744053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:47.757743 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:47.807660 kubelet[1887]: E0117 00:01:47.807587 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:47.812475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:47.812797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:48.227578 sshd[1877]: Accepted publickey for core from 4.153.228.146 port 33098 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:01:48.230211 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:01:48.237979 systemd-logind[1582]: New session 7 of user core.
Jan 17 00:01:48.244730 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:01:48.557600 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:01:48.557938 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:01:48.856947 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:01:48.865042 (dockerd)[1912]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:01:49.115154 dockerd[1912]: time="2026-01-17T00:01:49.113491329Z" level=info msg="Starting up"
Jan 17 00:01:49.213262 dockerd[1912]: time="2026-01-17T00:01:49.213203995Z" level=info msg="Loading containers: start."
Jan 17 00:01:49.323172 kernel: Initializing XFRM netlink socket
Jan 17 00:01:49.416787 systemd-networkd[1244]: docker0: Link UP
Jan 17 00:01:49.435145 dockerd[1912]: time="2026-01-17T00:01:49.434780890Z" level=info msg="Loading containers: done."
Jan 17 00:01:49.454992 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck842034005-merged.mount: Deactivated successfully.
Jan 17 00:01:49.458592 dockerd[1912]: time="2026-01-17T00:01:49.458533288Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:01:49.458699 dockerd[1912]: time="2026-01-17T00:01:49.458650485Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:01:49.458792 dockerd[1912]: time="2026-01-17T00:01:49.458772282Z" level=info msg="Daemon has completed initialization"
Jan 17 00:01:49.500281 dockerd[1912]: time="2026-01-17T00:01:49.500140444Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:01:49.500792 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:01:50.419407 update_engine[1587]: I20260117 00:01:50.419221 1587 update_attempter.cc:509] Updating boot flags...
Jan 17 00:01:50.461151 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2058)
Jan 17 00:01:50.528160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1982)
Jan 17 00:01:50.567626 containerd[1598]: time="2026-01-17T00:01:50.567303960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 17 00:01:51.166831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731422518.mount: Deactivated successfully.
Jan 17 00:01:52.181956 containerd[1598]: time="2026-01-17T00:01:52.180884389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:52.182862 containerd[1598]: time="2026-01-17T00:01:52.182828546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080"
Jan 17 00:01:52.184642 containerd[1598]: time="2026-01-17T00:01:52.184596146Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:52.188314 containerd[1598]: time="2026-01-17T00:01:52.188273585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:52.190520 containerd[1598]: time="2026-01-17T00:01:52.189533357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.622185718s"
Jan 17 00:01:52.190520 containerd[1598]: time="2026-01-17T00:01:52.189579036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 17 00:01:52.191292 containerd[1598]: time="2026-01-17T00:01:52.191266198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 17 00:01:53.581539 containerd[1598]: time="2026-01-17T00:01:53.581463752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:53.584709 containerd[1598]: time="2026-01-17T00:01:53.584246534Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106"
Jan 17 00:01:53.587152 containerd[1598]: time="2026-01-17T00:01:53.585841981Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:53.592377 containerd[1598]: time="2026-01-17T00:01:53.592215328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:53.593321 containerd[1598]: time="2026-01-17T00:01:53.593274826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.401866871s"
Jan 17 00:01:53.593321 containerd[1598]: time="2026-01-17T00:01:53.593317145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 17 00:01:53.593926 containerd[1598]: time="2026-01-17T00:01:53.593868453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 17 00:01:54.570367 containerd[1598]: time="2026-01-17T00:01:54.570314690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:54.571427 containerd[1598]: time="2026-01-17T00:01:54.571395389Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767"
Jan 17 00:01:54.572698 containerd[1598]: time="2026-01-17T00:01:54.572207253Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:54.576333 containerd[1598]: time="2026-01-17T00:01:54.576299253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:54.581345 containerd[1598]: time="2026-01-17T00:01:54.581308035Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 986.10189ms"
Jan 17 00:01:54.581468 containerd[1598]: time="2026-01-17T00:01:54.581453073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 17 00:01:54.582072 containerd[1598]: time="2026-01-17T00:01:54.582049821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 17 00:01:55.621273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684058903.mount: Deactivated successfully.
Jan 17 00:01:55.970866 containerd[1598]: time="2026-01-17T00:01:55.970714239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.973329 containerd[1598]: time="2026-01-17T00:01:55.973293312Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750"
Jan 17 00:01:55.974895 containerd[1598]: time="2026-01-17T00:01:55.974865243Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.977431 containerd[1598]: time="2026-01-17T00:01:55.977368797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.979929 containerd[1598]: time="2026-01-17T00:01:55.979863832Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.397770972s"
Jan 17 00:01:55.980378 containerd[1598]: time="2026-01-17T00:01:55.980182186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 17 00:01:55.981431 containerd[1598]: time="2026-01-17T00:01:55.981237646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 17 00:01:56.533267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488628439.mount: Deactivated successfully.
Jan 17 00:01:57.280547 containerd[1598]: time="2026-01-17T00:01:57.280459941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.283286 containerd[1598]: time="2026-01-17T00:01:57.283220497Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jan 17 00:01:57.284147 containerd[1598]: time="2026-01-17T00:01:57.284004204Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.287956 containerd[1598]: time="2026-01-17T00:01:57.287891421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.292544 containerd[1598]: time="2026-01-17T00:01:57.291965876Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.31069323s"
Jan 17 00:01:57.292544 containerd[1598]: time="2026-01-17T00:01:57.292020915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 17 00:01:57.293332 containerd[1598]: time="2026-01-17T00:01:57.293284854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:01:57.824001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 17 00:01:57.832500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:57.834046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73193153.mount: Deactivated successfully.
Jan 17 00:01:57.846138 containerd[1598]: time="2026-01-17T00:01:57.845625471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.847560 containerd[1598]: time="2026-01-17T00:01:57.847363163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 17 00:01:57.848875 containerd[1598]: time="2026-01-17T00:01:57.848545304Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.853144 containerd[1598]: time="2026-01-17T00:01:57.851553215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:57.853144 containerd[1598]: time="2026-01-17T00:01:57.852819995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.322664ms"
Jan 17 00:01:57.853144 containerd[1598]: time="2026-01-17T00:01:57.852857474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 17 00:01:57.853881 containerd[1598]: time="2026-01-17T00:01:57.853756220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 17 00:01:57.963356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:57.967806 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:58.013172 kubelet[2209]: E0117 00:01:58.013093 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:58.015880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:58.016083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:58.428409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903933162.mount: Deactivated successfully.
Jan 17 00:02:00.167713 containerd[1598]: time="2026-01-17T00:02:00.167611454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:00.170534 containerd[1598]: time="2026-01-17T00:02:00.170471576Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Jan 17 00:02:00.172327 containerd[1598]: time="2026-01-17T00:02:00.172249952Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:00.176154 containerd[1598]: time="2026-01-17T00:02:00.175618707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:00.177742 containerd[1598]: time="2026-01-17T00:02:00.177585001Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.323785382s"
Jan 17 00:02:00.177742 containerd[1598]: time="2026-01-17T00:02:00.177629920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 17 00:02:07.272049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:02:07.282893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:02:07.321151 systemd[1]: Reloading requested from client PID 2299 ('systemctl') (unit session-7.scope)...
Jan 17 00:02:07.321329 systemd[1]: Reloading...
Jan 17 00:02:07.438218 zram_generator::config[2342]: No configuration found.
Jan 17 00:02:07.548373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:02:07.618292 systemd[1]: Reloading finished in 296 ms.
Jan 17 00:02:07.669762 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:02:07.670043 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:02:07.670594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:02:07.678690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:02:07.809381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:02:07.824600 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:02:07.868140 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:02:07.868140 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:02:07.868140 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:02:07.868140 kubelet[2401]: I0117 00:02:07.866961 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:02:09.919341 kubelet[2401]: I0117 00:02:09.919296 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:02:09.919762 kubelet[2401]: I0117 00:02:09.919748 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:02:09.920142 kubelet[2401]: I0117 00:02:09.920122 2401 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:02:09.947418 kubelet[2401]: E0117 00:02:09.947349 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.224.97.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:02:09.950486 kubelet[2401]: I0117 00:02:09.950457 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:02:09.956496 kubelet[2401]: E0117 00:02:09.956443 2401 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:02:09.956496 kubelet[2401]: I0117 00:02:09.956480 2401 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:02:09.960430 kubelet[2401]: I0117 00:02:09.960391 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:02:09.960828 kubelet[2401]: I0117 00:02:09.960780 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:02:09.961039 kubelet[2401]: I0117 00:02:09.960811 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-ce65c18e74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:02:09.961191 kubelet[2401]: I0117 00:02:09.961099 2401 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:02:09.961191 kubelet[2401]: I0117 00:02:09.961127 2401 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:02:09.961382 kubelet[2401]: I0117 00:02:09.961342 2401 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:02:09.965118 kubelet[2401]: I0117 00:02:09.965060 2401 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:02:09.965242 kubelet[2401]: I0117 00:02:09.965095 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:02:09.965242 kubelet[2401]: I0117 00:02:09.965231 2401 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:02:09.965332 kubelet[2401]: I0117 00:02:09.965250 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:02:09.970151 kubelet[2401]: W0117 00:02:09.969820 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.97.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-ce65c18e74&limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused
Jan 17 00:02:09.970151 kubelet[2401]: E0117 00:02:09.969966 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.97.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-ce65c18e74&limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:02:09.970151 kubelet[2401]: I0117 00:02:09.970093 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:02:09.972138 kubelet[2401]: I0117 00:02:09.971366 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:02:09.972138 kubelet[2401]: W0117 00:02:09.971525 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:02:09.973817 kubelet[2401]: I0117 00:02:09.973361 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:02:09.973817 kubelet[2401]: I0117 00:02:09.973421 2401 server.go:1287] "Started kubelet"
Jan 17 00:02:09.977629 kubelet[2401]: E0117 00:02:09.977358 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.97.13:6443/api/v1/namespaces/default/events\": dial tcp 46.224.97.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-ce65c18e74.188b5bb945ee8bfd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-ce65c18e74,UID:ci-4081-3-6-n-ce65c18e74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-ce65c18e74,},FirstTimestamp:2026-01-17 00:02:09.973390333 +0000 UTC m=+2.144847353,LastTimestamp:2026-01-17 00:02:09.973390333 +0000 UTC m=+2.144847353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-ce65c18e74,}"
Jan 17 00:02:09.977796 kubelet[2401]: W0117 00:02:09.977661 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.97.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused
Jan 17 00:02:09.977796 kubelet[2401]: E0117 00:02:09.977703 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.97.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:02:09.979480 kubelet[2401]: I0117 00:02:09.979316 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:02:09.981973 kubelet[2401]: I0117 00:02:09.981916 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:02:09.983547 kubelet[2401]: I0117 00:02:09.982790 2401 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:02:09.983857 kubelet[2401]: I0117 00:02:09.983839 2401 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:02:09.985398 kubelet[2401]: I0117 00:02:09.984067 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:02:09.985635 kubelet[2401]: I0117 00:02:09.985605 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:02:09.985687 kubelet[2401]: I0117 00:02:09.984462 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:02:09.985802 kubelet[2401]: I0117 00:02:09.985747 2401 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:02:09.985893 kubelet[2401]: E0117 00:02:09.985863 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-ce65c18e74\" not found"
Jan 17 00:02:09.987716 kubelet[2401]: I0117 00:02:09.987691 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:02:09.987982 kubelet[2401]: E0117 00:02:09.987943 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.97.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ce65c18e74?timeout=10s\": dial tcp 46.224.97.13:6443: connect: connection refused" interval="200ms"
Jan 17 00:02:09.988485 kubelet[2401]: I0117 00:02:09.988445 2401 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:02:09.988593 kubelet[2401]: I0117 00:02:09.988570 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:02:09.991839 kubelet[2401]: E0117 00:02:09.991782 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:02:09.992267 kubelet[2401]: I0117 00:02:09.992171 2401 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:02:10.001455 kubelet[2401]: W0117 00:02:10.001310 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.97.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused
Jan 17 00:02:10.001455 kubelet[2401]: E0117 00:02:10.001384 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.97.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:02:10.003578 kubelet[2401]: I0117 00:02:10.003436 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:02:10.006594 kubelet[2401]: I0117 00:02:10.006261 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:02:10.006594 kubelet[2401]: I0117 00:02:10.006290 2401 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:02:10.006594 kubelet[2401]: I0117 00:02:10.006320 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:02:10.006594 kubelet[2401]: I0117 00:02:10.006326 2401 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:02:10.006594 kubelet[2401]: E0117 00:02:10.006372 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:02:10.019060 kubelet[2401]: E0117 00:02:10.018937 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.97.13:6443/api/v1/namespaces/default/events\": dial tcp 46.224.97.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-ce65c18e74.188b5bb945ee8bfd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-ce65c18e74,UID:ci-4081-3-6-n-ce65c18e74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-ce65c18e74,},FirstTimestamp:2026-01-17 00:02:09.973390333 +0000 UTC m=+2.144847353,LastTimestamp:2026-01-17 00:02:09.973390333 +0000 UTC m=+2.144847353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-ce65c18e74,}" Jan 17 00:02:10.019972 kubelet[2401]: W0117 00:02:10.019934 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.224.97.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused Jan 17 00:02:10.020271 kubelet[2401]: E0117 00:02:10.020093 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.224.97.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" 
logger="UnhandledError" Jan 17 00:02:10.020563 kubelet[2401]: I0117 00:02:10.020546 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:10.020956 kubelet[2401]: I0117 00:02:10.020640 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:10.020956 kubelet[2401]: I0117 00:02:10.020671 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:10.023469 kubelet[2401]: I0117 00:02:10.022998 2401 policy_none.go:49] "None policy: Start" Jan 17 00:02:10.023469 kubelet[2401]: I0117 00:02:10.023036 2401 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:02:10.023469 kubelet[2401]: I0117 00:02:10.023058 2401 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:02:10.031165 kubelet[2401]: I0117 00:02:10.029454 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:02:10.031165 kubelet[2401]: I0117 00:02:10.029659 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:10.031165 kubelet[2401]: I0117 00:02:10.029679 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:10.031349 kubelet[2401]: I0117 00:02:10.031335 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:10.034537 kubelet[2401]: E0117 00:02:10.034513 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:02:10.034974 kubelet[2401]: E0117 00:02:10.034949 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-ce65c18e74\" not found" Jan 17 00:02:10.115610 kubelet[2401]: E0117 00:02:10.115544 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.120807 kubelet[2401]: E0117 00:02:10.120586 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.121426 kubelet[2401]: E0117 00:02:10.121404 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.131969 kubelet[2401]: I0117 00:02:10.131907 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.132498 kubelet[2401]: E0117 00:02:10.132460 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.97.13:6443/api/v1/nodes\": dial tcp 46.224.97.13:6443: connect: connection refused" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.188712 kubelet[2401]: E0117 00:02:10.188560 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.97.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ce65c18e74?timeout=10s\": dial tcp 46.224.97.13:6443: connect: connection refused" interval="400ms" Jan 17 00:02:10.287418 kubelet[2401]: I0117 00:02:10.287098 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287418 kubelet[2401]: I0117 00:02:10.287206 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287418 kubelet[2401]: I0117 00:02:10.287251 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287418 kubelet[2401]: I0117 00:02:10.287290 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287743 kubelet[2401]: I0117 00:02:10.287391 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287743 
kubelet[2401]: I0117 00:02:10.287543 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287743 kubelet[2401]: I0117 00:02:10.287583 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287743 kubelet[2401]: I0117 00:02:10.287621 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad3d76dc5692238c562b8a8922a5548b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-ce65c18e74\" (UID: \"ad3d76dc5692238c562b8a8922a5548b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.287743 kubelet[2401]: I0117 00:02:10.287654 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.336075 kubelet[2401]: I0117 00:02:10.335982 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.337191 kubelet[2401]: E0117 00:02:10.337133 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://46.224.97.13:6443/api/v1/nodes\": dial tcp 46.224.97.13:6443: connect: connection refused" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.420477 containerd[1598]: time="2026-01-17T00:02:10.420360726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-ce65c18e74,Uid:ad3d76dc5692238c562b8a8922a5548b,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:10.422396 containerd[1598]: time="2026-01-17T00:02:10.422164314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-ce65c18e74,Uid:984aa048de2b1d8b1df411b858fb274b,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:10.423173 containerd[1598]: time="2026-01-17T00:02:10.422976748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-ce65c18e74,Uid:fa6f4ec312854a2d97991fe95eba0255,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:10.590947 kubelet[2401]: E0117 00:02:10.590760 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.97.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ce65c18e74?timeout=10s\": dial tcp 46.224.97.13:6443: connect: connection refused" interval="800ms" Jan 17 00:02:10.740150 kubelet[2401]: I0117 00:02:10.739929 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.740355 kubelet[2401]: E0117 00:02:10.740311 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.97.13:6443/api/v1/nodes\": dial tcp 46.224.97.13:6443: connect: connection refused" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:10.832732 kubelet[2401]: W0117 00:02:10.832645 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.97.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-ce65c18e74&limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection 
refused Jan 17 00:02:10.832907 kubelet[2401]: E0117 00:02:10.832759 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.97.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-ce65c18e74&limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:10.930275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149463574.mount: Deactivated successfully. Jan 17 00:02:10.938705 containerd[1598]: time="2026-01-17T00:02:10.938597196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:10.940610 containerd[1598]: time="2026-01-17T00:02:10.940564902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:10.940726 containerd[1598]: time="2026-01-17T00:02:10.940672462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:10.942749 containerd[1598]: time="2026-01-17T00:02:10.942717647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:10.943535 containerd[1598]: time="2026-01-17T00:02:10.943505722Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:10.944903 containerd[1598]: time="2026-01-17T00:02:10.944777513Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 17 00:02:10.946171 containerd[1598]: time="2026-01-17T00:02:10.945631787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 17 00:02:10.948263 containerd[1598]: time="2026-01-17T00:02:10.948225889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:10.950025 containerd[1598]: time="2026-01-17T00:02:10.949978837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.734524ms" Jan 17 00:02:10.953680 containerd[1598]: time="2026-01-17T00:02:10.953643571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.603143ms" Jan 17 00:02:10.954899 containerd[1598]: time="2026-01-17T00:02:10.954864203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.356118ms" Jan 17 00:02:11.079565 containerd[1598]: time="2026-01-17T00:02:11.079297690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:11.079565 containerd[1598]: time="2026-01-17T00:02:11.079350370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:11.079565 containerd[1598]: time="2026-01-17T00:02:11.079375730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.079565 containerd[1598]: time="2026-01-17T00:02:11.079466089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.084328 containerd[1598]: time="2026-01-17T00:02:11.084092739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:11.084328 containerd[1598]: time="2026-01-17T00:02:11.084207298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:11.084328 containerd[1598]: time="2026-01-17T00:02:11.084227938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.084958 containerd[1598]: time="2026-01-17T00:02:11.084671295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.091294 containerd[1598]: time="2026-01-17T00:02:11.091171893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:11.091294 containerd[1598]: time="2026-01-17T00:02:11.091237732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:11.091294 containerd[1598]: time="2026-01-17T00:02:11.091264972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.091590 containerd[1598]: time="2026-01-17T00:02:11.091403251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:11.123196 kubelet[2401]: W0117 00:02:11.122628 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.97.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused Jan 17 00:02:11.123196 kubelet[2401]: E0117 00:02:11.122690 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.97.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:11.158505 containerd[1598]: time="2026-01-17T00:02:11.158242495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-ce65c18e74,Uid:fa6f4ec312854a2d97991fe95eba0255,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc37c636dd56211e55268554653b6c94ad455db35be035e7a15977cc262a60a\"" Jan 17 00:02:11.164519 containerd[1598]: time="2026-01-17T00:02:11.164343815Z" level=info msg="CreateContainer within sandbox \"3bc37c636dd56211e55268554653b6c94ad455db35be035e7a15977cc262a60a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:02:11.179039 containerd[1598]: time="2026-01-17T00:02:11.178838000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-ce65c18e74,Uid:984aa048de2b1d8b1df411b858fb274b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe7aad1eaf200837f690db3e3d4f9b3b2cbb3796088663207af64f12554b3396\"" Jan 17 00:02:11.182828 containerd[1598]: time="2026-01-17T00:02:11.182470536Z" level=info msg="CreateContainer within sandbox \"fe7aad1eaf200837f690db3e3d4f9b3b2cbb3796088663207af64f12554b3396\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:02:11.183831 containerd[1598]: time="2026-01-17T00:02:11.183794288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-ce65c18e74,Uid:ad3d76dc5692238c562b8a8922a5548b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef47532ba2c0e829172df77a8b4d5258950a0edf597dd83c5587fa7837993f1\"" Jan 17 00:02:11.190702 containerd[1598]: time="2026-01-17T00:02:11.190333325Z" level=info msg="CreateContainer within sandbox \"4ef47532ba2c0e829172df77a8b4d5258950a0edf597dd83c5587fa7837993f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:02:11.194390 containerd[1598]: time="2026-01-17T00:02:11.194077061Z" level=info msg="CreateContainer within sandbox \"3bc37c636dd56211e55268554653b6c94ad455db35be035e7a15977cc262a60a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405\"" Jan 17 00:02:11.195310 containerd[1598]: time="2026-01-17T00:02:11.195270013Z" level=info msg="StartContainer for \"7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405\"" Jan 17 00:02:11.203638 containerd[1598]: time="2026-01-17T00:02:11.203532519Z" level=info msg="CreateContainer within sandbox \"fe7aad1eaf200837f690db3e3d4f9b3b2cbb3796088663207af64f12554b3396\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0386a6230badef2c212428f749bdfe6684f844f7b7f7c236e919cba260937d2\"" Jan 17 00:02:11.204450 
containerd[1598]: time="2026-01-17T00:02:11.204357193Z" level=info msg="StartContainer for \"b0386a6230badef2c212428f749bdfe6684f844f7b7f7c236e919cba260937d2\"" Jan 17 00:02:11.205813 containerd[1598]: time="2026-01-17T00:02:11.205500346Z" level=info msg="CreateContainer within sandbox \"4ef47532ba2c0e829172df77a8b4d5258950a0edf597dd83c5587fa7837993f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4\"" Jan 17 00:02:11.207182 containerd[1598]: time="2026-01-17T00:02:11.206135022Z" level=info msg="StartContainer for \"0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4\"" Jan 17 00:02:11.246023 kubelet[2401]: W0117 00:02:11.245955 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.97.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.97.13:6443: connect: connection refused Jan 17 00:02:11.246159 kubelet[2401]: E0117 00:02:11.246027 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.97.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.97.13:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:11.293171 containerd[1598]: time="2026-01-17T00:02:11.293107214Z" level=info msg="StartContainer for \"7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405\" returns successfully" Jan 17 00:02:11.306797 containerd[1598]: time="2026-01-17T00:02:11.306560646Z" level=info msg="StartContainer for \"b0386a6230badef2c212428f749bdfe6684f844f7b7f7c236e919cba260937d2\" returns successfully" Jan 17 00:02:11.325889 containerd[1598]: time="2026-01-17T00:02:11.325832760Z" level=info msg="StartContainer for \"0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4\" returns successfully" Jan 
17 00:02:11.393500 kubelet[2401]: E0117 00:02:11.393283 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.97.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ce65c18e74?timeout=10s\": dial tcp 46.224.97.13:6443: connect: connection refused" interval="1.6s" Jan 17 00:02:11.544702 kubelet[2401]: I0117 00:02:11.544614 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:12.035170 kubelet[2401]: E0117 00:02:12.033724 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:12.038149 kubelet[2401]: E0117 00:02:12.036792 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:12.041955 kubelet[2401]: E0117 00:02:12.041916 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:13.046494 kubelet[2401]: E0117 00:02:13.046454 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:13.048443 kubelet[2401]: E0117 00:02:13.047089 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ce65c18e74\" not found" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.311727 kubelet[2401]: I0117 00:02:14.311353 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.389127 kubelet[2401]: I0117 00:02:14.388569 2401 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.445259 kubelet[2401]: E0117 00:02:14.445200 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.445259 kubelet[2401]: I0117 00:02:14.445248 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.450362 kubelet[2401]: E0117 00:02:14.450307 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.450362 kubelet[2401]: I0117 00:02:14.450348 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.454125 kubelet[2401]: E0117 00:02:14.453375 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ce65c18e74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:14.980575 kubelet[2401]: I0117 00:02:14.980523 2401 apiserver.go:52] "Watching apiserver" Jan 17 00:02:14.988831 kubelet[2401]: I0117 00:02:14.988756 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:02:15.071137 kubelet[2401]: I0117 00:02:15.070619 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:15.964978 kubelet[2401]: I0117 00:02:15.964913 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:16.271720 
systemd[1]: Reloading requested from client PID 2680 ('systemctl') (unit session-7.scope)... Jan 17 00:02:16.271735 systemd[1]: Reloading... Jan 17 00:02:16.373145 zram_generator::config[2722]: No configuration found. Jan 17 00:02:16.486693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:02:16.582537 systemd[1]: Reloading finished in 310 ms. Jan 17 00:02:16.614198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:16.629933 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:02:16.630887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:16.641588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:16.779288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:16.792567 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:16.851136 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:16.852176 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:02:16.852176 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:02:16.852176 kubelet[2777]: I0117 00:02:16.851568 2777 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:02:16.859653 kubelet[2777]: I0117 00:02:16.859614 2777 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:02:16.859809 kubelet[2777]: I0117 00:02:16.859798 2777 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:02:16.860165 kubelet[2777]: I0117 00:02:16.860147 2777 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:02:16.861632 kubelet[2777]: I0117 00:02:16.861610 2777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:02:16.864337 kubelet[2777]: I0117 00:02:16.864315 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:02:16.869680 kubelet[2777]: E0117 00:02:16.869652 2777 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:02:16.869871 kubelet[2777]: I0117 00:02:16.869853 2777 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:02:16.872920 kubelet[2777]: I0117 00:02:16.872901 2777 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:02:16.873650 kubelet[2777]: I0117 00:02:16.873594 2777 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:02:16.873968 kubelet[2777]: I0117 00:02:16.873742 2777 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-ce65c18e74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:02:16.874143 kubelet[2777]: I0117 00:02:16.874128 2777 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 17 00:02:16.874291 kubelet[2777]: I0117 00:02:16.874210 2777 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:02:16.874335 kubelet[2777]: I0117 00:02:16.874269 2777 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:16.874571 kubelet[2777]: I0117 00:02:16.874558 2777 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:02:16.875333 kubelet[2777]: I0117 00:02:16.875311 2777 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:02:16.875461 kubelet[2777]: I0117 00:02:16.875424 2777 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:02:16.875461 kubelet[2777]: I0117 00:02:16.875439 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:02:16.890773 kubelet[2777]: I0117 00:02:16.889369 2777 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:02:16.890773 kubelet[2777]: I0117 00:02:16.889869 2777 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:02:16.890773 kubelet[2777]: I0117 00:02:16.890305 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:02:16.890773 kubelet[2777]: I0117 00:02:16.890331 2777 server.go:1287] "Started kubelet" Jan 17 00:02:16.899390 kubelet[2777]: I0117 00:02:16.899368 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:02:16.901075 kubelet[2777]: I0117 00:02:16.901036 2777 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:02:16.902082 kubelet[2777]: I0117 00:02:16.902058 2777 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:02:16.903302 kubelet[2777]: I0117 00:02:16.903241 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:02:16.904030 kubelet[2777]: I0117 00:02:16.903439 2777 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:02:16.904030 kubelet[2777]: I0117 00:02:16.903621 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:02:16.906422 kubelet[2777]: E0117 00:02:16.906394 2777 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:02:16.906762 kubelet[2777]: I0117 00:02:16.906734 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:02:16.908403 kubelet[2777]: I0117 00:02:16.908357 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:02:16.908522 kubelet[2777]: I0117 00:02:16.908503 2777 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:02:16.910902 kubelet[2777]: I0117 00:02:16.910860 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:02:16.914363 kubelet[2777]: I0117 00:02:16.914336 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:02:16.914470 kubelet[2777]: I0117 00:02:16.914457 2777 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:02:16.914559 kubelet[2777]: I0117 00:02:16.914545 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:02:16.914623 kubelet[2777]: I0117 00:02:16.914613 2777 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:02:16.914769 kubelet[2777]: E0117 00:02:16.914748 2777 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:02:16.920162 kubelet[2777]: I0117 00:02:16.920143 2777 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:02:16.920353 kubelet[2777]: I0117 00:02:16.920265 2777 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:02:16.920459 kubelet[2777]: I0117 00:02:16.920441 2777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:02:16.993748 kubelet[2777]: I0117 00:02:16.993715 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:16.993748 kubelet[2777]: I0117 00:02:16.993736 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:16.993748 kubelet[2777]: I0117 00:02:16.993756 2777 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:16.994057 kubelet[2777]: I0117 00:02:16.993968 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:02:16.994057 kubelet[2777]: I0117 00:02:16.993979 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:02:16.994057 kubelet[2777]: I0117 00:02:16.993997 2777 policy_none.go:49] "None policy: Start" Jan 17 00:02:16.994057 kubelet[2777]: I0117 00:02:16.994006 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:02:16.994057 kubelet[2777]: I0117 00:02:16.994015 2777 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:02:16.994242 kubelet[2777]: I0117 00:02:16.994152 2777 state_mem.go:75] "Updated machine memory state" Jan 17 00:02:16.997142 kubelet[2777]: 
I0117 00:02:16.995381 2777 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:02:16.997142 kubelet[2777]: I0117 00:02:16.995543 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:16.997142 kubelet[2777]: I0117 00:02:16.995553 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:16.998836 kubelet[2777]: E0117 00:02:16.998744 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:02:16.999648 kubelet[2777]: I0117 00:02:16.999622 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:17.016505 kubelet[2777]: I0117 00:02:17.016456 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.017660 kubelet[2777]: I0117 00:02:17.017614 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.017893 kubelet[2777]: I0117 00:02:17.017869 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.031772 kubelet[2777]: E0117 00:02:17.031587 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.031772 kubelet[2777]: E0117 00:02:17.031648 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ce65c18e74\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.103627 kubelet[2777]: I0117 00:02:17.103467 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-ce65c18e74" 
Jan 17 00:02:17.119491 kubelet[2777]: I0117 00:02:17.119439 2777 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.119614 kubelet[2777]: I0117 00:02:17.119558 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211049 kubelet[2777]: I0117 00:02:17.210495 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211049 kubelet[2777]: I0117 00:02:17.210577 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211049 kubelet[2777]: I0117 00:02:17.210631 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211049 kubelet[2777]: I0117 00:02:17.210674 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211049 kubelet[2777]: I0117 00:02:17.210718 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211527 kubelet[2777]: I0117 00:02:17.210759 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad3d76dc5692238c562b8a8922a5548b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-ce65c18e74\" (UID: \"ad3d76dc5692238c562b8a8922a5548b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211527 kubelet[2777]: I0117 00:02:17.210816 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/984aa048de2b1d8b1df411b858fb274b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" (UID: \"984aa048de2b1d8b1df411b858fb274b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211527 kubelet[2777]: I0117 00:02:17.210859 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.211527 kubelet[2777]: I0117 00:02:17.210900 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fa6f4ec312854a2d97991fe95eba0255-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-ce65c18e74\" (UID: \"fa6f4ec312854a2d97991fe95eba0255\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.266912 sudo[2808]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:02:17.267221 sudo[2808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:02:17.753573 sudo[2808]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:17.876746 kubelet[2777]: I0117 00:02:17.876705 2777 apiserver.go:52] "Watching apiserver" Jan 17 00:02:17.908919 kubelet[2777]: I0117 00:02:17.908853 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:02:17.958149 kubelet[2777]: I0117 00:02:17.958094 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.960152 kubelet[2777]: I0117 00:02:17.959046 2777 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.967399 kubelet[2777]: E0117 00:02:17.967042 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ce65c18e74\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.968175 kubelet[2777]: E0117 00:02:17.968149 2777 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-ce65c18e74\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" Jan 17 00:02:17.997347 kubelet[2777]: I0117 00:02:17.997264 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ce65c18e74" podStartSLOduration=0.997245756 podStartE2EDuration="997.245756ms" podCreationTimestamp="2026-01-17 00:02:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:17.987076002 +0000 UTC m=+1.188533232" watchObservedRunningTime="2026-01-17 00:02:17.997245756 +0000 UTC m=+1.198702946" Jan 17 00:02:18.011267 kubelet[2777]: I0117 00:02:18.010747 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ce65c18e74" podStartSLOduration=3.010727059 podStartE2EDuration="3.010727059s" podCreationTimestamp="2026-01-17 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:17.997589315 +0000 UTC m=+1.199046505" watchObservedRunningTime="2026-01-17 00:02:18.010727059 +0000 UTC m=+1.212184249" Jan 17 00:02:18.011267 kubelet[2777]: I0117 00:02:18.010939 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ce65c18e74" podStartSLOduration=3.010932178 podStartE2EDuration="3.010932178s" podCreationTimestamp="2026-01-17 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:18.010862939 +0000 UTC m=+1.212320129" watchObservedRunningTime="2026-01-17 00:02:18.010932178 +0000 UTC m=+1.212389368" Jan 17 00:02:19.912524 sudo[1897]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:20.008446 sshd[1877]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:20.014603 systemd[1]: sshd@9-46.224.97.13:22-4.153.228.146:33098.service: Deactivated successfully. Jan 17 00:02:20.018963 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:02:20.021186 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:02:20.023248 systemd-logind[1582]: Removed session 7. 
Jan 17 00:02:22.256166 kubelet[2777]: I0117 00:02:22.256019 2777 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:02:22.257591 containerd[1598]: time="2026-01-17T00:02:22.257485784Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:02:22.259246 kubelet[2777]: I0117 00:02:22.257813 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:02:23.248099 kubelet[2777]: I0117 00:02:23.248031 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-bpf-maps\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248099 kubelet[2777]: I0117 00:02:23.248091 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hubble-tls\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248357 kubelet[2777]: I0117 00:02:23.248142 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-etc-cni-netd\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248357 kubelet[2777]: I0117 00:02:23.248171 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hostproc\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248357 
kubelet[2777]: I0117 00:02:23.248195 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-net\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248357 kubelet[2777]: I0117 00:02:23.248220 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e959f85e-8d4b-489c-b279-a5d441c78b5c-kube-proxy\") pod \"kube-proxy-4x2lz\" (UID: \"e959f85e-8d4b-489c-b279-a5d441c78b5c\") " pod="kube-system/kube-proxy-4x2lz" Jan 17 00:02:23.248357 kubelet[2777]: I0117 00:02:23.248248 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csr67\" (UniqueName: \"kubernetes.io/projected/e959f85e-8d4b-489c-b279-a5d441c78b5c-kube-api-access-csr67\") pod \"kube-proxy-4x2lz\" (UID: \"e959f85e-8d4b-489c-b279-a5d441c78b5c\") " pod="kube-system/kube-proxy-4x2lz" Jan 17 00:02:23.248357 kubelet[2777]: I0117 00:02:23.248278 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e959f85e-8d4b-489c-b279-a5d441c78b5c-lib-modules\") pod \"kube-proxy-4x2lz\" (UID: \"e959f85e-8d4b-489c-b279-a5d441c78b5c\") " pod="kube-system/kube-proxy-4x2lz" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248299 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-cgroup\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248321 2777 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-config-path\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248342 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-kernel\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248390 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-run\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248419 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-lib-modules\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248632 kubelet[2777]: I0117 00:02:23.248447 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-xtables-lock\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248965 kubelet[2777]: I0117 00:02:23.248470 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-clustermesh-secrets\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248965 kubelet[2777]: I0117 00:02:23.248493 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbhp\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-kube-api-access-8pbhp\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.248965 kubelet[2777]: I0117 00:02:23.248517 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e959f85e-8d4b-489c-b279-a5d441c78b5c-xtables-lock\") pod \"kube-proxy-4x2lz\" (UID: \"e959f85e-8d4b-489c-b279-a5d441c78b5c\") " pod="kube-system/kube-proxy-4x2lz" Jan 17 00:02:23.248965 kubelet[2777]: I0117 00:02:23.248618 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cni-path\") pod \"cilium-nqbl2\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") " pod="kube-system/cilium-nqbl2" Jan 17 00:02:23.353177 kubelet[2777]: I0117 00:02:23.349789 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d4650a2-a476-48bb-a110-da6f904d041b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-96h5q\" (UID: \"5d4650a2-a476-48bb-a110-da6f904d041b\") " pod="kube-system/cilium-operator-6c4d7847fc-96h5q" Jan 17 00:02:23.353177 kubelet[2777]: I0117 00:02:23.349965 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p99\" (UniqueName: 
\"kubernetes.io/projected/5d4650a2-a476-48bb-a110-da6f904d041b-kube-api-access-t2p99\") pod \"cilium-operator-6c4d7847fc-96h5q\" (UID: \"5d4650a2-a476-48bb-a110-da6f904d041b\") " pod="kube-system/cilium-operator-6c4d7847fc-96h5q" Jan 17 00:02:23.475713 containerd[1598]: time="2026-01-17T00:02:23.475245929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x2lz,Uid:e959f85e-8d4b-489c-b279-a5d441c78b5c,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:23.488455 containerd[1598]: time="2026-01-17T00:02:23.488048011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqbl2,Uid:d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:23.505207 containerd[1598]: time="2026-01-17T00:02:23.504557361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:23.505207 containerd[1598]: time="2026-01-17T00:02:23.504626361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:23.505207 containerd[1598]: time="2026-01-17T00:02:23.504642001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.505207 containerd[1598]: time="2026-01-17T00:02:23.504746280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.512212 containerd[1598]: time="2026-01-17T00:02:23.511451820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:23.512212 containerd[1598]: time="2026-01-17T00:02:23.511586260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:23.512212 containerd[1598]: time="2026-01-17T00:02:23.511607220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.512384 containerd[1598]: time="2026-01-17T00:02:23.512208578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.563002 containerd[1598]: time="2026-01-17T00:02:23.562882625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqbl2,Uid:d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16,Namespace:kube-system,Attempt:0,} returns sandbox id \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\"" Jan 17 00:02:23.566851 containerd[1598]: time="2026-01-17T00:02:23.566594694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x2lz,Uid:e959f85e-8d4b-489c-b279-a5d441c78b5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"172e565e0e0cbd3cf6aec2588357e270edfd20469c31eac78b8ab3f066646e68\"" Jan 17 00:02:23.569350 containerd[1598]: time="2026-01-17T00:02:23.569017087Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:02:23.571676 containerd[1598]: time="2026-01-17T00:02:23.571626759Z" level=info msg="CreateContainer within sandbox \"172e565e0e0cbd3cf6aec2588357e270edfd20469c31eac78b8ab3f066646e68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:02:23.587734 containerd[1598]: time="2026-01-17T00:02:23.587569831Z" level=info msg="CreateContainer within sandbox \"172e565e0e0cbd3cf6aec2588357e270edfd20469c31eac78b8ab3f066646e68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70ac9e43d1f104ecf602533f159ce7c8d5a5965609ec13c77a2ab71b5ef0d1d5\"" Jan 17 00:02:23.588293 containerd[1598]: time="2026-01-17T00:02:23.588262949Z" level=info 
msg="StartContainer for \"70ac9e43d1f104ecf602533f159ce7c8d5a5965609ec13c77a2ab71b5ef0d1d5\"" Jan 17 00:02:23.626600 containerd[1598]: time="2026-01-17T00:02:23.626551314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-96h5q,Uid:5d4650a2-a476-48bb-a110-da6f904d041b,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:23.651572 containerd[1598]: time="2026-01-17T00:02:23.651427479Z" level=info msg="StartContainer for \"70ac9e43d1f104ecf602533f159ce7c8d5a5965609ec13c77a2ab71b5ef0d1d5\" returns successfully" Jan 17 00:02:23.662086 containerd[1598]: time="2026-01-17T00:02:23.661678768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:23.662370 containerd[1598]: time="2026-01-17T00:02:23.661779608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:23.663661 containerd[1598]: time="2026-01-17T00:02:23.663182923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.663661 containerd[1598]: time="2026-01-17T00:02:23.663568642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:23.718028 containerd[1598]: time="2026-01-17T00:02:23.717974518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-96h5q,Uid:5d4650a2-a476-48bb-a110-da6f904d041b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\"" Jan 17 00:02:27.706077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009327309.mount: Deactivated successfully. 
Jan 17 00:02:28.858095 kubelet[2777]: I0117 00:02:28.858025 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4x2lz" podStartSLOduration=5.857992605 podStartE2EDuration="5.857992605s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:23.989201542 +0000 UTC m=+7.190658772" watchObservedRunningTime="2026-01-17 00:02:28.857992605 +0000 UTC m=+12.059449795" Jan 17 00:02:29.114532 containerd[1598]: time="2026-01-17T00:02:29.114251461Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:29.116143 containerd[1598]: time="2026-01-17T00:02:29.115419259Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 17 00:02:29.118448 containerd[1598]: time="2026-01-17T00:02:29.118407853Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:29.126965 containerd[1598]: time="2026-01-17T00:02:29.126913476Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.55755711s" Jan 17 00:02:29.126965 containerd[1598]: time="2026-01-17T00:02:29.126969315Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 00:02:29.133518 containerd[1598]: time="2026-01-17T00:02:29.133259263Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:02:29.135166 containerd[1598]: time="2026-01-17T00:02:29.135096659Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:02:29.158867 containerd[1598]: time="2026-01-17T00:02:29.158779330Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\"" Jan 17 00:02:29.160256 containerd[1598]: time="2026-01-17T00:02:29.159916288Z" level=info msg="StartContainer for \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\"" Jan 17 00:02:29.190019 systemd[1]: run-containerd-runc-k8s.io-da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362-runc.6kZpgB.mount: Deactivated successfully. 
Jan 17 00:02:29.216568 containerd[1598]: time="2026-01-17T00:02:29.216523732Z" level=info msg="StartContainer for \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\" returns successfully" Jan 17 00:02:29.414801 containerd[1598]: time="2026-01-17T00:02:29.414354768Z" level=info msg="shim disconnected" id=da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362 namespace=k8s.io Jan 17 00:02:29.414801 containerd[1598]: time="2026-01-17T00:02:29.414445328Z" level=warning msg="cleaning up after shim disconnected" id=da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362 namespace=k8s.io Jan 17 00:02:29.414801 containerd[1598]: time="2026-01-17T00:02:29.414467368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:30.000797 containerd[1598]: time="2026-01-17T00:02:30.000626090Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:02:30.024702 containerd[1598]: time="2026-01-17T00:02:30.024610484Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\"" Jan 17 00:02:30.025363 containerd[1598]: time="2026-01-17T00:02:30.025334242Z" level=info msg="StartContainer for \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\"" Jan 17 00:02:30.080444 containerd[1598]: time="2026-01-17T00:02:30.080389177Z" level=info msg="StartContainer for \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\" returns successfully" Jan 17 00:02:30.093583 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:02:30.094381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:02:30.094460 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:02:30.102826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:02:30.132684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:02:30.147196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362-rootfs.mount: Deactivated successfully. Jan 17 00:02:30.151043 containerd[1598]: time="2026-01-17T00:02:30.150936842Z" level=info msg="shim disconnected" id=d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19 namespace=k8s.io Jan 17 00:02:30.151428 containerd[1598]: time="2026-01-17T00:02:30.151038002Z" level=warning msg="cleaning up after shim disconnected" id=d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19 namespace=k8s.io Jan 17 00:02:30.151428 containerd[1598]: time="2026-01-17T00:02:30.151063202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:31.006407 containerd[1598]: time="2026-01-17T00:02:31.006160764Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:02:31.037976 containerd[1598]: time="2026-01-17T00:02:31.037396028Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\"" Jan 17 00:02:31.038733 containerd[1598]: time="2026-01-17T00:02:31.038307626Z" level=info msg="StartContainer for \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\"" Jan 17 00:02:31.071994 systemd[1]: run-containerd-runc-k8s.io-95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad-runc.dNwsaI.mount: Deactivated successfully. 
Jan 17 00:02:31.104396 containerd[1598]: time="2026-01-17T00:02:31.104167188Z" level=info msg="StartContainer for \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\" returns successfully" Jan 17 00:02:31.137812 containerd[1598]: time="2026-01-17T00:02:31.137698568Z" level=info msg="shim disconnected" id=95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad namespace=k8s.io Jan 17 00:02:31.137812 containerd[1598]: time="2026-01-17T00:02:31.137783767Z" level=warning msg="cleaning up after shim disconnected" id=95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad namespace=k8s.io Jan 17 00:02:31.137812 containerd[1598]: time="2026-01-17T00:02:31.137807647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:31.145601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad-rootfs.mount: Deactivated successfully. Jan 17 00:02:31.296249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077108500.mount: Deactivated successfully. 
Jan 17 00:02:32.003934 containerd[1598]: time="2026-01-17T00:02:32.002768454Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:32.003934 containerd[1598]: time="2026-01-17T00:02:32.003858132Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 17 00:02:32.005141 containerd[1598]: time="2026-01-17T00:02:32.005085370Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:32.007638 containerd[1598]: time="2026-01-17T00:02:32.007596206Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.874287904s" Jan 17 00:02:32.007638 containerd[1598]: time="2026-01-17T00:02:32.007633886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 00:02:32.011567 containerd[1598]: time="2026-01-17T00:02:32.011090000Z" level=info msg="CreateContainer within sandbox \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:02:32.014869 containerd[1598]: time="2026-01-17T00:02:32.014816354Z" level=info msg="CreateContainer within sandbox 
\"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:02:32.030404 containerd[1598]: time="2026-01-17T00:02:32.030351967Z" level=info msg="CreateContainer within sandbox \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\"" Jan 17 00:02:32.032146 containerd[1598]: time="2026-01-17T00:02:32.032093724Z" level=info msg="StartContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\"" Jan 17 00:02:32.049004 containerd[1598]: time="2026-01-17T00:02:32.048951016Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\"" Jan 17 00:02:32.051050 containerd[1598]: time="2026-01-17T00:02:32.051014773Z" level=info msg="StartContainer for \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\"" Jan 17 00:02:32.113464 containerd[1598]: time="2026-01-17T00:02:32.113412267Z" level=info msg="StartContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" returns successfully" Jan 17 00:02:32.126687 containerd[1598]: time="2026-01-17T00:02:32.126630045Z" level=info msg="StartContainer for \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\" returns successfully" Jan 17 00:02:32.151178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637457605.mount: Deactivated successfully. Jan 17 00:02:32.158923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c-rootfs.mount: Deactivated successfully. 
Jan 17 00:02:32.203164 containerd[1598]: time="2026-01-17T00:02:32.202840637Z" level=info msg="shim disconnected" id=3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c namespace=k8s.io Jan 17 00:02:32.203164 containerd[1598]: time="2026-01-17T00:02:32.202997677Z" level=warning msg="cleaning up after shim disconnected" id=3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c namespace=k8s.io Jan 17 00:02:32.203164 containerd[1598]: time="2026-01-17T00:02:32.203017997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:33.036477 containerd[1598]: time="2026-01-17T00:02:33.036394957Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:02:33.065469 kubelet[2777]: I0117 00:02:33.062457 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-96h5q" podStartSLOduration=1.7734585059999999 podStartE2EDuration="10.062438676s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="2026-01-17 00:02:23.719495354 +0000 UTC m=+6.920952504" lastFinishedPulling="2026-01-17 00:02:32.008475484 +0000 UTC m=+15.209932674" observedRunningTime="2026-01-17 00:02:33.060252599 +0000 UTC m=+16.261709829" watchObservedRunningTime="2026-01-17 00:02:33.062438676 +0000 UTC m=+16.263895866" Jan 17 00:02:33.083328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789100883.mount: Deactivated successfully. 
Jan 17 00:02:33.093348 containerd[1598]: time="2026-01-17T00:02:33.092329429Z" level=info msg="CreateContainer within sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\"" Jan 17 00:02:33.093348 containerd[1598]: time="2026-01-17T00:02:33.093029547Z" level=info msg="StartContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\"" Jan 17 00:02:33.178935 containerd[1598]: time="2026-01-17T00:02:33.178616092Z" level=info msg="StartContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" returns successfully" Jan 17 00:02:33.361229 kubelet[2777]: I0117 00:02:33.361154 2777 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:02:33.428191 kubelet[2777]: I0117 00:02:33.427597 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/580c7c8d-d74f-4d5b-abdd-0990fa8818ed-config-volume\") pod \"coredns-668d6bf9bc-zxpkc\" (UID: \"580c7c8d-d74f-4d5b-abdd-0990fa8818ed\") " pod="kube-system/coredns-668d6bf9bc-zxpkc" Jan 17 00:02:33.428191 kubelet[2777]: I0117 00:02:33.427697 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stq2x\" (UniqueName: \"kubernetes.io/projected/15e4afb8-b39e-4e45-a7e6-9435dc06badb-kube-api-access-stq2x\") pod \"coredns-668d6bf9bc-s6qrn\" (UID: \"15e4afb8-b39e-4e45-a7e6-9435dc06badb\") " pod="kube-system/coredns-668d6bf9bc-s6qrn" Jan 17 00:02:33.428191 kubelet[2777]: I0117 00:02:33.427755 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15e4afb8-b39e-4e45-a7e6-9435dc06badb-config-volume\") pod \"coredns-668d6bf9bc-s6qrn\" (UID: 
\"15e4afb8-b39e-4e45-a7e6-9435dc06badb\") " pod="kube-system/coredns-668d6bf9bc-s6qrn" Jan 17 00:02:33.428191 kubelet[2777]: I0117 00:02:33.427802 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kzh\" (UniqueName: \"kubernetes.io/projected/580c7c8d-d74f-4d5b-abdd-0990fa8818ed-kube-api-access-l7kzh\") pod \"coredns-668d6bf9bc-zxpkc\" (UID: \"580c7c8d-d74f-4d5b-abdd-0990fa8818ed\") " pod="kube-system/coredns-668d6bf9bc-zxpkc" Jan 17 00:02:33.707913 containerd[1598]: time="2026-01-17T00:02:33.705631540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qrn,Uid:15e4afb8-b39e-4e45-a7e6-9435dc06badb,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:33.709620 containerd[1598]: time="2026-01-17T00:02:33.708604376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxpkc,Uid:580c7c8d-d74f-4d5b-abdd-0990fa8818ed,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:34.063077 kubelet[2777]: I0117 00:02:34.062540 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nqbl2" podStartSLOduration=5.497319813 podStartE2EDuration="11.062513743s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="2026-01-17 00:02:23.565284978 +0000 UTC m=+6.766742168" lastFinishedPulling="2026-01-17 00:02:29.130478948 +0000 UTC m=+12.331936098" observedRunningTime="2026-01-17 00:02:34.061858304 +0000 UTC m=+17.263315574" watchObservedRunningTime="2026-01-17 00:02:34.062513743 +0000 UTC m=+17.263970933" Jan 17 00:02:36.187491 systemd-networkd[1244]: cilium_host: Link UP Jan 17 00:02:36.188817 systemd-networkd[1244]: cilium_net: Link UP Jan 17 00:02:36.189740 systemd-networkd[1244]: cilium_net: Gained carrier Jan 17 00:02:36.190555 systemd-networkd[1244]: cilium_host: Gained carrier Jan 17 00:02:36.295819 systemd-networkd[1244]: cilium_vxlan: Link UP Jan 17 00:02:36.295826 systemd-networkd[1244]: cilium_vxlan: Gained 
carrier Jan 17 00:02:36.474269 systemd-networkd[1244]: cilium_host: Gained IPv6LL Jan 17 00:02:36.582442 kernel: NET: Registered PF_ALG protocol family Jan 17 00:02:37.016368 systemd-networkd[1244]: cilium_net: Gained IPv6LL Jan 17 00:02:37.327666 systemd-networkd[1244]: lxc_health: Link UP Jan 17 00:02:37.327899 systemd-networkd[1244]: lxc_health: Gained carrier Jan 17 00:02:37.593285 systemd-networkd[1244]: cilium_vxlan: Gained IPv6LL Jan 17 00:02:37.771524 systemd-networkd[1244]: lxc2237e30213d3: Link UP Jan 17 00:02:37.778504 kernel: eth0: renamed from tmp512c8 Jan 17 00:02:37.785237 systemd-networkd[1244]: lxc2237e30213d3: Gained carrier Jan 17 00:02:37.810824 systemd-networkd[1244]: lxc8f6ed0d75725: Link UP Jan 17 00:02:37.816253 kernel: eth0: renamed from tmp30f74 Jan 17 00:02:37.824264 systemd-networkd[1244]: lxc8f6ed0d75725: Gained carrier Jan 17 00:02:39.384360 systemd-networkd[1244]: lxc_health: Gained IPv6LL Jan 17 00:02:39.512409 systemd-networkd[1244]: lxc2237e30213d3: Gained IPv6LL Jan 17 00:02:39.640463 systemd-networkd[1244]: lxc8f6ed0d75725: Gained IPv6LL Jan 17 00:02:41.730197 containerd[1598]: time="2026-01-17T00:02:41.727496627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:41.730197 containerd[1598]: time="2026-01-17T00:02:41.727559147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:41.730197 containerd[1598]: time="2026-01-17T00:02:41.727582387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:41.730197 containerd[1598]: time="2026-01-17T00:02:41.727722508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:41.746520 containerd[1598]: time="2026-01-17T00:02:41.740248452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:41.746520 containerd[1598]: time="2026-01-17T00:02:41.740306172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:41.746520 containerd[1598]: time="2026-01-17T00:02:41.740318732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:41.746520 containerd[1598]: time="2026-01-17T00:02:41.740407653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:41.858537 containerd[1598]: time="2026-01-17T00:02:41.858489883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6qrn,Uid:15e4afb8-b39e-4e45-a7e6-9435dc06badb,Namespace:kube-system,Attempt:0,} returns sandbox id \"512c80e90176c16f92ca765579f3e8fc38285af1dd5b9338f95e0aee20aeefa1\"" Jan 17 00:02:41.864219 containerd[1598]: time="2026-01-17T00:02:41.863980454Z" level=info msg="CreateContainer within sandbox \"512c80e90176c16f92ca765579f3e8fc38285af1dd5b9338f95e0aee20aeefa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:41.868936 containerd[1598]: time="2026-01-17T00:02:41.868711423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxpkc,Uid:580c7c8d-d74f-4d5b-abdd-0990fa8818ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"30f74aed8eebbbda5f1abc5dad9a97e6aee7f56e4adc896879201d2cf8d515bf\"" Jan 17 00:02:41.873643 containerd[1598]: time="2026-01-17T00:02:41.873575753Z" level=info msg="CreateContainer within sandbox \"30f74aed8eebbbda5f1abc5dad9a97e6aee7f56e4adc896879201d2cf8d515bf\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:41.890079 containerd[1598]: time="2026-01-17T00:02:41.890037825Z" level=info msg="CreateContainer within sandbox \"512c80e90176c16f92ca765579f3e8fc38285af1dd5b9338f95e0aee20aeefa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5eede8af7fb8bb3ebf94406df03ce92656e7c8597473386deb7a2d489eb84ee7\"" Jan 17 00:02:41.892219 containerd[1598]: time="2026-01-17T00:02:41.891407947Z" level=info msg="StartContainer for \"5eede8af7fb8bb3ebf94406df03ce92656e7c8597473386deb7a2d489eb84ee7\"" Jan 17 00:02:41.892325 containerd[1598]: time="2026-01-17T00:02:41.892300509Z" level=info msg="CreateContainer within sandbox \"30f74aed8eebbbda5f1abc5dad9a97e6aee7f56e4adc896879201d2cf8d515bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a81c45f43e23cbe5f51899790ff93d0a5e3670339ee226539690934bf85eebd9\"" Jan 17 00:02:41.892784 containerd[1598]: time="2026-01-17T00:02:41.892743390Z" level=info msg="StartContainer for \"a81c45f43e23cbe5f51899790ff93d0a5e3670339ee226539690934bf85eebd9\"" Jan 17 00:02:41.967535 containerd[1598]: time="2026-01-17T00:02:41.966886535Z" level=info msg="StartContainer for \"5eede8af7fb8bb3ebf94406df03ce92656e7c8597473386deb7a2d489eb84ee7\" returns successfully" Jan 17 00:02:41.975288 containerd[1598]: time="2026-01-17T00:02:41.975147111Z" level=info msg="StartContainer for \"a81c45f43e23cbe5f51899790ff93d0a5e3670339ee226539690934bf85eebd9\" returns successfully" Jan 17 00:02:42.087176 kubelet[2777]: I0117 00:02:42.084829 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zxpkc" podStartSLOduration=19.084803062 podStartE2EDuration="19.084803062s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:42.083659232 +0000 UTC m=+25.285116542" 
watchObservedRunningTime="2026-01-17 00:02:42.084803062 +0000 UTC m=+25.286260292" Jan 17 00:02:42.131855 kubelet[2777]: I0117 00:02:42.131663 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s6qrn" podStartSLOduration=19.131646442 podStartE2EDuration="19.131646442s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:42.107234526 +0000 UTC m=+25.308691676" watchObservedRunningTime="2026-01-17 00:02:42.131646442 +0000 UTC m=+25.333103632" Jan 17 00:02:42.748849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237617845.mount: Deactivated successfully. Jan 17 00:03:29.732193 systemd[1]: sshd@1-46.224.97.13:22-37.153.194.108:54126.service: Deactivated successfully. Jan 17 00:04:35.432585 systemd[1]: Started sshd@10-46.224.97.13:22-4.153.228.146:42616.service - OpenSSH per-connection server daemon (4.153.228.146:42616). Jan 17 00:04:36.029965 sshd[4161]: Accepted publickey for core from 4.153.228.146 port 42616 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:36.030948 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:36.035550 systemd-logind[1582]: New session 8 of user core. Jan 17 00:04:36.044651 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:04:36.548252 sshd[4161]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:36.552737 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:04:36.552785 systemd[1]: sshd@10-46.224.97.13:22-4.153.228.146:42616.service: Deactivated successfully. Jan 17 00:04:36.557836 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:04:36.558970 systemd-logind[1582]: Removed session 8. 
Jan 17 00:04:41.656636 systemd[1]: Started sshd@11-46.224.97.13:22-4.153.228.146:42630.service - OpenSSH per-connection server daemon (4.153.228.146:42630). Jan 17 00:04:42.267377 sshd[4176]: Accepted publickey for core from 4.153.228.146 port 42630 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:42.269770 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:42.274398 systemd-logind[1582]: New session 9 of user core. Jan 17 00:04:42.281659 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:04:42.775166 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:42.780635 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:04:42.781178 systemd[1]: sshd@11-46.224.97.13:22-4.153.228.146:42630.service: Deactivated successfully. Jan 17 00:04:42.784713 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:04:42.787189 systemd-logind[1582]: Removed session 9. Jan 17 00:04:47.885498 systemd[1]: Started sshd@12-46.224.97.13:22-4.153.228.146:34694.service - OpenSSH per-connection server daemon (4.153.228.146:34694). Jan 17 00:04:48.491210 sshd[4191]: Accepted publickey for core from 4.153.228.146 port 34694 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:48.493318 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:48.499800 systemd-logind[1582]: New session 10 of user core. Jan 17 00:04:48.506643 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:04:48.988683 sshd[4191]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:48.997790 systemd[1]: sshd@12-46.224.97.13:22-4.153.228.146:34694.service: Deactivated successfully. Jan 17 00:04:48.997861 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:04:49.000454 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 17 00:04:49.003172 systemd-logind[1582]: Removed session 10. Jan 17 00:04:49.098602 systemd[1]: Started sshd@13-46.224.97.13:22-4.153.228.146:34708.service - OpenSSH per-connection server daemon (4.153.228.146:34708). Jan 17 00:04:49.722180 sshd[4206]: Accepted publickey for core from 4.153.228.146 port 34708 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:49.723908 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:49.731100 systemd-logind[1582]: New session 11 of user core. Jan 17 00:04:49.737482 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:04:50.275593 systemd[1]: Started sshd@14-46.224.97.13:22-121.142.146.165:60565.service - OpenSSH per-connection server daemon (121.142.146.165:60565). Jan 17 00:04:50.290008 sshd[4206]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:50.294960 systemd[1]: sshd@13-46.224.97.13:22-4.153.228.146:34708.service: Deactivated successfully. Jan 17 00:04:50.298942 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:04:50.299231 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:04:50.301453 systemd-logind[1582]: Removed session 11. Jan 17 00:04:50.387392 systemd[1]: Started sshd@15-46.224.97.13:22-4.153.228.146:34722.service - OpenSSH per-connection server daemon (4.153.228.146:34722). Jan 17 00:04:50.991155 sshd[4219]: Accepted publickey for core from 4.153.228.146 port 34722 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:50.993557 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:50.999874 systemd-logind[1582]: New session 12 of user core. Jan 17 00:04:51.004475 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 17 00:04:51.488510 sshd[4219]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:51.493650 systemd[1]: sshd@15-46.224.97.13:22-4.153.228.146:34722.service: Deactivated successfully. Jan 17 00:04:51.500007 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:04:51.500982 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:04:51.502046 systemd-logind[1582]: Removed session 12. Jan 17 00:04:53.426646 sshd[4215]: Invalid user support from 121.142.146.165 port 60565 Jan 17 00:04:54.428222 sshd[4235]: pam_faillock(sshd:auth): User unknown Jan 17 00:04:54.431308 sshd[4215]: Postponed keyboard-interactive for invalid user support from 121.142.146.165 port 60565 ssh2 [preauth] Jan 17 00:04:55.149884 sshd[4235]: pam_unix(sshd:auth): check pass; user unknown Jan 17 00:04:55.149939 sshd[4235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.142.146.165 Jan 17 00:04:55.150806 sshd[4235]: pam_faillock(sshd:auth): User unknown Jan 17 00:04:56.600552 systemd[1]: Started sshd@16-46.224.97.13:22-4.153.228.146:56028.service - OpenSSH per-connection server daemon (4.153.228.146:56028). Jan 17 00:04:56.646455 sshd[4215]: PAM: Permission denied for illegal user support from 121.142.146.165 Jan 17 00:04:56.647249 sshd[4215]: Failed keyboard-interactive/pam for invalid user support from 121.142.146.165 port 60565 ssh2 Jan 17 00:04:57.241440 sshd[4236]: Accepted publickey for core from 4.153.228.146 port 56028 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:04:57.243357 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:57.249078 systemd-logind[1582]: New session 13 of user core. Jan 17 00:04:57.259750 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 00:04:57.487269 sshd[4215]: Connection closed by invalid user support 121.142.146.165 port 60565 [preauth] Jan 17 00:04:57.491471 systemd[1]: sshd@14-46.224.97.13:22-121.142.146.165:60565.service: Deactivated successfully. Jan 17 00:04:57.755383 sshd[4236]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:57.762803 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:04:57.766483 systemd[1]: sshd@16-46.224.97.13:22-4.153.228.146:56028.service: Deactivated successfully. Jan 17 00:04:57.771240 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:04:57.774561 systemd-logind[1582]: Removed session 13. Jan 17 00:05:02.855875 systemd[1]: Started sshd@17-46.224.97.13:22-4.153.228.146:56030.service - OpenSSH per-connection server daemon (4.153.228.146:56030). Jan 17 00:05:03.451645 sshd[4253]: Accepted publickey for core from 4.153.228.146 port 56030 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:03.453589 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:03.458633 systemd-logind[1582]: New session 14 of user core. Jan 17 00:05:03.464580 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:05:03.959460 sshd[4253]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:03.964035 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:05:03.965568 systemd[1]: sshd@17-46.224.97.13:22-4.153.228.146:56030.service: Deactivated successfully. Jan 17 00:05:03.968586 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:05:03.971147 systemd-logind[1582]: Removed session 14. Jan 17 00:05:04.069584 systemd[1]: Started sshd@18-46.224.97.13:22-4.153.228.146:56036.service - OpenSSH per-connection server daemon (4.153.228.146:56036). 
Jan 17 00:05:04.685931 sshd[4267]: Accepted publickey for core from 4.153.228.146 port 56036 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:04.688940 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:04.696894 systemd-logind[1582]: New session 15 of user core. Jan 17 00:05:04.701684 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:05:05.276997 sshd[4267]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:05.282411 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:05:05.283446 systemd[1]: sshd@18-46.224.97.13:22-4.153.228.146:56036.service: Deactivated successfully. Jan 17 00:05:05.286465 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:05:05.288828 systemd-logind[1582]: Removed session 15. Jan 17 00:05:05.379427 systemd[1]: Started sshd@19-46.224.97.13:22-4.153.228.146:51532.service - OpenSSH per-connection server daemon (4.153.228.146:51532). Jan 17 00:05:05.986615 sshd[4279]: Accepted publickey for core from 4.153.228.146 port 51532 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:05.988767 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:05.994646 systemd-logind[1582]: New session 16 of user core. Jan 17 00:05:05.998424 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:05:06.931029 sshd[4279]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:06.937576 systemd[1]: sshd@19-46.224.97.13:22-4.153.228.146:51532.service: Deactivated successfully. Jan 17 00:05:06.942830 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:05:06.945200 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:05:06.947081 systemd-logind[1582]: Removed session 16. 
Jan 17 00:05:07.044393 systemd[1]: Started sshd@20-46.224.97.13:22-4.153.228.146:51536.service - OpenSSH per-connection server daemon (4.153.228.146:51536).
Jan 17 00:05:07.692393 sshd[4298]: Accepted publickey for core from 4.153.228.146 port 51536 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:07.694514 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:07.699341 systemd-logind[1582]: New session 17 of user core.
Jan 17 00:05:07.706454 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:05:08.335449 sshd[4298]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:08.341299 systemd[1]: sshd@20-46.224.97.13:22-4.153.228.146:51536.service: Deactivated successfully.
Jan 17 00:05:08.345548 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:05:08.346845 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:05:08.348359 systemd-logind[1582]: Removed session 17.
Jan 17 00:05:08.440706 systemd[1]: Started sshd@21-46.224.97.13:22-4.153.228.146:51544.service - OpenSSH per-connection server daemon (4.153.228.146:51544).
Jan 17 00:05:09.078391 sshd[4310]: Accepted publickey for core from 4.153.228.146 port 51544 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:09.080449 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:09.087200 systemd-logind[1582]: New session 18 of user core.
Jan 17 00:05:09.089454 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:05:09.593231 sshd[4310]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:09.598793 systemd[1]: sshd@21-46.224.97.13:22-4.153.228.146:51544.service: Deactivated successfully.
Jan 17 00:05:09.604995 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:05:09.606498 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:05:09.608415 systemd-logind[1582]: Removed session 18.
Jan 17 00:05:14.697595 systemd[1]: Started sshd@22-46.224.97.13:22-4.153.228.146:39034.service - OpenSSH per-connection server daemon (4.153.228.146:39034).
Jan 17 00:05:15.311930 sshd[4326]: Accepted publickey for core from 4.153.228.146 port 39034 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:15.313916 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:15.320200 systemd-logind[1582]: New session 19 of user core.
Jan 17 00:05:15.328538 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:05:15.817162 sshd[4326]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:15.822756 systemd[1]: sshd@22-46.224.97.13:22-4.153.228.146:39034.service: Deactivated successfully.
Jan 17 00:05:15.828158 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:05:15.829096 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:05:15.829944 systemd-logind[1582]: Removed session 19.
Jan 17 00:05:17.817730 systemd[1]: Started sshd@23-46.224.97.13:22-222.124.250.234:37034.service - OpenSSH per-connection server daemon (222.124.250.234:37034).
Jan 17 00:05:18.987438 sshd[4342]: Received disconnect from 222.124.250.234 port 37034:11: Bye Bye [preauth]
Jan 17 00:05:18.987438 sshd[4342]: Disconnected from authenticating user root 222.124.250.234 port 37034 [preauth]
Jan 17 00:05:18.990161 systemd[1]: sshd@23-46.224.97.13:22-222.124.250.234:37034.service: Deactivated successfully.
Jan 17 00:05:20.929857 systemd[1]: Started sshd@24-46.224.97.13:22-4.153.228.146:39042.service - OpenSSH per-connection server daemon (4.153.228.146:39042).
Jan 17 00:05:21.542931 sshd[4347]: Accepted publickey for core from 4.153.228.146 port 39042 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:21.545324 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:21.550710 systemd-logind[1582]: New session 20 of user core.
Jan 17 00:05:21.557760 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:05:22.050426 sshd[4347]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:22.057567 systemd[1]: sshd@24-46.224.97.13:22-4.153.228.146:39042.service: Deactivated successfully.
Jan 17 00:05:22.061196 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:05:22.063272 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:05:22.064597 systemd-logind[1582]: Removed session 20.
Jan 17 00:05:26.168478 systemd[1]: Started sshd@25-46.224.97.13:22-101.47.160.237:33256.service - OpenSSH per-connection server daemon (101.47.160.237:33256).
Jan 17 00:05:27.167642 systemd[1]: Started sshd@26-46.224.97.13:22-4.153.228.146:46970.service - OpenSSH per-connection server daemon (4.153.228.146:46970).
Jan 17 00:05:27.222775 sshd[4363]: Invalid user grid from 101.47.160.237 port 33256
Jan 17 00:05:27.416452 sshd[4363]: Received disconnect from 101.47.160.237 port 33256:11: Bye Bye [preauth]
Jan 17 00:05:27.416452 sshd[4363]: Disconnected from invalid user grid 101.47.160.237 port 33256 [preauth]
Jan 17 00:05:27.422364 systemd[1]: sshd@25-46.224.97.13:22-101.47.160.237:33256.service: Deactivated successfully.
Jan 17 00:05:27.796718 sshd[4365]: Accepted publickey for core from 4.153.228.146 port 46970 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:27.798794 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:27.804506 systemd-logind[1582]: New session 21 of user core.
Jan 17 00:05:27.808382 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:05:28.314412 sshd[4365]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:28.320319 systemd[1]: sshd@26-46.224.97.13:22-4.153.228.146:46970.service: Deactivated successfully.
Jan 17 00:05:28.326509 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:05:28.327897 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:05:28.329050 systemd-logind[1582]: Removed session 21.
Jan 17 00:05:28.409512 systemd[1]: Started sshd@27-46.224.97.13:22-4.153.228.146:46978.service - OpenSSH per-connection server daemon (4.153.228.146:46978).
Jan 17 00:05:29.009274 sshd[4382]: Accepted publickey for core from 4.153.228.146 port 46978 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:29.012089 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:29.020447 systemd-logind[1582]: New session 22 of user core.
Jan 17 00:05:29.027462 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:05:30.963588 containerd[1598]: time="2026-01-17T00:05:30.963533145Z" level=info msg="StopContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" with timeout 30 (s)"
Jan 17 00:05:30.964358 containerd[1598]: time="2026-01-17T00:05:30.964257829Z" level=info msg="Stop container \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" with signal terminated"
Jan 17 00:05:30.985670 containerd[1598]: time="2026-01-17T00:05:30.982566653Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:05:30.991920 containerd[1598]: time="2026-01-17T00:05:30.991799985Z" level=info msg="StopContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" with timeout 2 (s)"
Jan 17 00:05:30.992374 containerd[1598]: time="2026-01-17T00:05:30.992345988Z" level=info msg="Stop container \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" with signal terminated"
Jan 17 00:05:31.002814 systemd-networkd[1244]: lxc_health: Link DOWN
Jan 17 00:05:31.002822 systemd-networkd[1244]: lxc_health: Lost carrier
Jan 17 00:05:31.030557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a-rootfs.mount: Deactivated successfully.
Jan 17 00:05:31.039090 containerd[1598]: time="2026-01-17T00:05:31.038873089Z" level=info msg="shim disconnected" id=664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a namespace=k8s.io
Jan 17 00:05:31.039090 containerd[1598]: time="2026-01-17T00:05:31.038951250Z" level=warning msg="cleaning up after shim disconnected" id=664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a namespace=k8s.io
Jan 17 00:05:31.039090 containerd[1598]: time="2026-01-17T00:05:31.038962010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:31.044378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38-rootfs.mount: Deactivated successfully.
Jan 17 00:05:31.053846 containerd[1598]: time="2026-01-17T00:05:31.053669452Z" level=info msg="shim disconnected" id=da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38 namespace=k8s.io
Jan 17 00:05:31.053846 containerd[1598]: time="2026-01-17T00:05:31.053737893Z" level=warning msg="cleaning up after shim disconnected" id=da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38 namespace=k8s.io
Jan 17 00:05:31.053846 containerd[1598]: time="2026-01-17T00:05:31.053748213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:31.071073 containerd[1598]: time="2026-01-17T00:05:31.071022550Z" level=info msg="StopContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" returns successfully"
Jan 17 00:05:31.072296 containerd[1598]: time="2026-01-17T00:05:31.071917955Z" level=info msg="StopPodSandbox for \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\""
Jan 17 00:05:31.072296 containerd[1598]: time="2026-01-17T00:05:31.072153996Z" level=info msg="Container to stop \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.075394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b-shm.mount: Deactivated successfully.
Jan 17 00:05:31.078500 containerd[1598]: time="2026-01-17T00:05:31.078449111Z" level=info msg="StopContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" returns successfully"
Jan 17 00:05:31.079191 containerd[1598]: time="2026-01-17T00:05:31.079164075Z" level=info msg="StopPodSandbox for \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\""
Jan 17 00:05:31.079289 containerd[1598]: time="2026-01-17T00:05:31.079211035Z" level=info msg="Container to stop \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.079289 containerd[1598]: time="2026-01-17T00:05:31.079223236Z" level=info msg="Container to stop \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.079289 containerd[1598]: time="2026-01-17T00:05:31.079232396Z" level=info msg="Container to stop \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.079289 containerd[1598]: time="2026-01-17T00:05:31.079242036Z" level=info msg="Container to stop \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.079289 containerd[1598]: time="2026-01-17T00:05:31.079253156Z" level=info msg="Container to stop \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:05:31.081757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374-shm.mount: Deactivated successfully.
Jan 17 00:05:31.123777 containerd[1598]: time="2026-01-17T00:05:31.123711445Z" level=info msg="shim disconnected" id=9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b namespace=k8s.io
Jan 17 00:05:31.123777 containerd[1598]: time="2026-01-17T00:05:31.123772726Z" level=warning msg="cleaning up after shim disconnected" id=9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b namespace=k8s.io
Jan 17 00:05:31.123777 containerd[1598]: time="2026-01-17T00:05:31.123782006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:31.124106 containerd[1598]: time="2026-01-17T00:05:31.124035767Z" level=info msg="shim disconnected" id=244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374 namespace=k8s.io
Jan 17 00:05:31.124106 containerd[1598]: time="2026-01-17T00:05:31.124086447Z" level=warning msg="cleaning up after shim disconnected" id=244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374 namespace=k8s.io
Jan 17 00:05:31.124106 containerd[1598]: time="2026-01-17T00:05:31.124095127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:31.140852 containerd[1598]: time="2026-01-17T00:05:31.140791501Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:05:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:05:31.141361 containerd[1598]: time="2026-01-17T00:05:31.141166023Z" level=info msg="TearDown network for sandbox \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" successfully"
Jan 17 00:05:31.141361 containerd[1598]: time="2026-01-17T00:05:31.141353104Z" level=info msg="StopPodSandbox for \"244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374\" returns successfully"
Jan 17 00:05:31.144258 containerd[1598]: time="2026-01-17T00:05:31.144143120Z" level=info msg="TearDown network for sandbox \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\" successfully"
Jan 17 00:05:31.144258 containerd[1598]: time="2026-01-17T00:05:31.144180520Z" level=info msg="StopPodSandbox for \"9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b\" returns successfully"
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280210 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d4650a2-a476-48bb-a110-da6f904d041b-cilium-config-path\") pod \"5d4650a2-a476-48bb-a110-da6f904d041b\" (UID: \"5d4650a2-a476-48bb-a110-da6f904d041b\") "
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280282 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbhp\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-kube-api-access-8pbhp\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280308 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-cgroup\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280331 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-lib-modules\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280357 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-etc-cni-netd\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282202 kubelet[2777]: I0117 00:05:31.280378 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-net\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280404 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-clustermesh-secrets\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280427 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-run\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280447 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-kernel\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280501 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hubble-tls\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280522 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-xtables-lock\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.282821 kubelet[2777]: I0117 00:05:31.280545 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hostproc\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.280569 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-config-path\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.280590 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-bpf-maps\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.280614 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2p99\" (UniqueName: \"kubernetes.io/projected/5d4650a2-a476-48bb-a110-da6f904d041b-kube-api-access-t2p99\") pod \"5d4650a2-a476-48bb-a110-da6f904d041b\" (UID: \"5d4650a2-a476-48bb-a110-da6f904d041b\") "
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.280639 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cni-path\") pod \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\" (UID: \"d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16\") "
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.280733 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cni-path" (OuterVolumeSpecName: "cni-path") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.283056 kubelet[2777]: I0117 00:05:31.281224 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.285622 kubelet[2777]: I0117 00:05:31.283908 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.285622 kubelet[2777]: I0117 00:05:31.284144 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.285622 kubelet[2777]: I0117 00:05:31.284185 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.285622 kubelet[2777]: I0117 00:05:31.284210 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.285622 kubelet[2777]: I0117 00:05:31.284300 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.286594 kubelet[2777]: I0117 00:05:31.286393 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.286787 kubelet[2777]: I0117 00:05:31.286533 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hostproc" (OuterVolumeSpecName: "hostproc") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.286898 kubelet[2777]: I0117 00:05:31.286878 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:05:31.289435 kubelet[2777]: I0117 00:05:31.289381 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4650a2-a476-48bb-a110-da6f904d041b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d4650a2-a476-48bb-a110-da6f904d041b" (UID: "5d4650a2-a476-48bb-a110-da6f904d041b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:05:31.289526 kubelet[2777]: I0117 00:05:31.289510 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-kube-api-access-8pbhp" (OuterVolumeSpecName: "kube-api-access-8pbhp") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "kube-api-access-8pbhp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:05:31.291875 kubelet[2777]: I0117 00:05:31.291833 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:05:31.292092 kubelet[2777]: I0117 00:05:31.292065 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4650a2-a476-48bb-a110-da6f904d041b-kube-api-access-t2p99" (OuterVolumeSpecName: "kube-api-access-t2p99") pod "5d4650a2-a476-48bb-a110-da6f904d041b" (UID: "5d4650a2-a476-48bb-a110-da6f904d041b"). InnerVolumeSpecName "kube-api-access-t2p99". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:05:31.293032 kubelet[2777]: I0117 00:05:31.293001 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:05:31.293246 kubelet[2777]: I0117 00:05:31.293226 2777 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" (UID: "d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:05:31.381808 kubelet[2777]: I0117 00:05:31.381718 2777 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hubble-tls\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.381808 kubelet[2777]: I0117 00:05:31.381790 2777 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-hostproc\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381813 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-config-path\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381847 2777 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-xtables-lock\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381867 2777 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-bpf-maps\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381885 2777 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cni-path\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381916 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2p99\" (UniqueName: \"kubernetes.io/projected/5d4650a2-a476-48bb-a110-da6f904d041b-kube-api-access-t2p99\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381939 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pbhp\" (UniqueName: \"kubernetes.io/projected/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-kube-api-access-8pbhp\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381959 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d4650a2-a476-48bb-a110-da6f904d041b-cilium-config-path\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382065 kubelet[2777]: I0117 00:05:31.381999 2777 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-lib-modules\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382017 2777 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-etc-cni-netd\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382030 2777 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-net\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382047 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-cgroup\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382063 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-cilium-run\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382098 2777 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-clustermesh-secrets\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.382400 kubelet[2777]: I0117 00:05:31.382142 2777 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-ce65c18e74\" DevicePath \"\""
Jan 17 00:05:31.504576 kubelet[2777]: I0117 00:05:31.502933 2777 scope.go:117] "RemoveContainer" containerID="664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a"
Jan 17 00:05:31.506259 containerd[1598]: time="2026-01-17T00:05:31.506198111Z" level=info msg="RemoveContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\""
Jan 17 00:05:31.511408 containerd[1598]: time="2026-01-17T00:05:31.511143659Z" level=info msg="RemoveContainer for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" returns successfully"
Jan 17 00:05:31.511509 kubelet[2777]: I0117 00:05:31.511406 2777 scope.go:117] "RemoveContainer" containerID="664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a"
Jan 17 00:05:31.511756 containerd[1598]: time="2026-01-17T00:05:31.511720782Z" level=error msg="ContainerStatus for \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\": not found"
Jan 17 00:05:31.512484 kubelet[2777]: E0117 00:05:31.512428 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\": not found" containerID="664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a"
Jan 17 00:05:31.512578 kubelet[2777]: I0117 00:05:31.512469 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a"} err="failed to get container status \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\": rpc error: code = NotFound desc = an error occurred when try to find container \"664582199f216e65f7d843b59d46b38966734b41fc21563da88824ca79a4664a\": not found"
Jan 17 00:05:31.512578 kubelet[2777]: I0117 00:05:31.512545 2777 scope.go:117] "RemoveContainer" containerID="da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38"
Jan 17 00:05:31.515824 containerd[1598]: time="2026-01-17T00:05:31.515782365Z" level=info msg="RemoveContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\""
Jan 17 00:05:31.521615 containerd[1598]: time="2026-01-17T00:05:31.521564037Z" level=info msg="RemoveContainer for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" returns successfully"
Jan 17 00:05:31.521843 kubelet[2777]: I0117 00:05:31.521818 2777 scope.go:117] "RemoveContainer" containerID="3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c"
Jan 17 00:05:31.523781 containerd[1598]: time="2026-01-17T00:05:31.523502208Z" level=info msg="RemoveContainer for \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\""
Jan 17 00:05:31.528091 containerd[1598]: time="2026-01-17T00:05:31.528047954Z" level=info msg="RemoveContainer for \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\" returns successfully"
Jan 17 00:05:31.531614 kubelet[2777]: I0117 00:05:31.531387 2777 scope.go:117] "RemoveContainer" containerID="95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad"
Jan 17 00:05:31.537397 containerd[1598]: time="2026-01-17T00:05:31.536828803Z" level=info msg="RemoveContainer for \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\""
Jan 17 00:05:31.545728 containerd[1598]: time="2026-01-17T00:05:31.545550212Z" level=info msg="RemoveContainer for \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\" returns successfully"
Jan 17 00:05:31.546325 kubelet[2777]: I0117 00:05:31.546203 2777 scope.go:117] "RemoveContainer" containerID="d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19"
Jan 17 00:05:31.547720 containerd[1598]: time="2026-01-17T00:05:31.547666104Z" level=info msg="RemoveContainer for \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\""
Jan 17 00:05:31.552057 containerd[1598]: time="2026-01-17T00:05:31.551967008Z" level=info msg="RemoveContainer for \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\" returns successfully"
Jan 17 00:05:31.552228 kubelet[2777]: I0117 00:05:31.552202 2777 scope.go:117] "RemoveContainer" containerID="da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362"
Jan 17 00:05:31.553575 containerd[1598]: time="2026-01-17T00:05:31.553334936Z" level=info msg="RemoveContainer for \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\""
Jan 17 00:05:31.556332 containerd[1598]: time="2026-01-17T00:05:31.556300512Z" level=info msg="RemoveContainer for \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\" returns successfully"
Jan 17 00:05:31.556640 kubelet[2777]: I0117 00:05:31.556618 2777 scope.go:117] "RemoveContainer" containerID="da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38"
Jan 17 00:05:31.556849 containerd[1598]: time="2026-01-17T00:05:31.556817715Z" level=error msg="ContainerStatus for \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\" failed" error="rpc error: code =
NotFound desc = an error occurred when try to find container \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\": not found" Jan 17 00:05:31.556973 kubelet[2777]: E0117 00:05:31.556946 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\": not found" containerID="da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38" Jan 17 00:05:31.557028 kubelet[2777]: I0117 00:05:31.557001 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38"} err="failed to get container status \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\": rpc error: code = NotFound desc = an error occurred when try to find container \"da03e6152ca2608aa2858e1dae08b62b55a4dc2ff1bee55486897cbf4b7e3b38\": not found" Jan 17 00:05:31.557028 kubelet[2777]: I0117 00:05:31.557026 2777 scope.go:117] "RemoveContainer" containerID="3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c" Jan 17 00:05:31.557392 containerd[1598]: time="2026-01-17T00:05:31.557360318Z" level=error msg="ContainerStatus for \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\": not found" Jan 17 00:05:31.557598 kubelet[2777]: E0117 00:05:31.557575 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\": not found" containerID="3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c" Jan 17 00:05:31.557642 kubelet[2777]: I0117 00:05:31.557622 2777 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c"} err="failed to get container status \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e6056ae832c6b0be9e32f7ba3da7f9732bce7df1b8ba6371eecdb104c8a005c\": not found" Jan 17 00:05:31.557677 kubelet[2777]: I0117 00:05:31.557645 2777 scope.go:117] "RemoveContainer" containerID="95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad" Jan 17 00:05:31.557857 containerd[1598]: time="2026-01-17T00:05:31.557825801Z" level=error msg="ContainerStatus for \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\": not found" Jan 17 00:05:31.558012 kubelet[2777]: E0117 00:05:31.557935 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\": not found" containerID="95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad" Jan 17 00:05:31.558012 kubelet[2777]: I0117 00:05:31.557960 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad"} err="failed to get container status \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\": rpc error: code = NotFound desc = an error occurred when try to find container \"95486d2434670671b62c639a30fe395402365dfe659546df042de37c53d78fad\": not found" Jan 17 00:05:31.558012 kubelet[2777]: I0117 00:05:31.557977 2777 scope.go:117] "RemoveContainer" 
containerID="d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19" Jan 17 00:05:31.558249 containerd[1598]: time="2026-01-17T00:05:31.558221643Z" level=error msg="ContainerStatus for \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\": not found" Jan 17 00:05:31.558407 kubelet[2777]: E0117 00:05:31.558388 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\": not found" containerID="d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19" Jan 17 00:05:31.558443 kubelet[2777]: I0117 00:05:31.558415 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19"} err="failed to get container status \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6a588b5526c6f27594ff52969f8c9988521e38a811c44702b11bf558668ec19\": not found" Jan 17 00:05:31.558443 kubelet[2777]: I0117 00:05:31.558437 2777 scope.go:117] "RemoveContainer" containerID="da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362" Jan 17 00:05:31.558695 containerd[1598]: time="2026-01-17T00:05:31.558665126Z" level=error msg="ContainerStatus for \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\": not found" Jan 17 00:05:31.558807 kubelet[2777]: E0117 00:05:31.558787 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\": not found" containerID="da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362" Jan 17 00:05:31.558839 kubelet[2777]: I0117 00:05:31.558811 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362"} err="failed to get container status \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\": rpc error: code = NotFound desc = an error occurred when try to find container \"da44aa56f41df8815d28cd0c53af8cfb36564cab7ed44991c38d21472bc74362\": not found" Jan 17 00:05:31.951936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ab739e8f5e7bdcfb6dbf6ef64760c5bc9807ae552523ae95532e53672c53b4b-rootfs.mount: Deactivated successfully. Jan 17 00:05:31.952436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244eb9a5d0a57b74dd7be650b4f2c5297ab8ddf152cc970da227e0f706447374-rootfs.mount: Deactivated successfully. Jan 17 00:05:31.952621 systemd[1]: var-lib-kubelet-pods-5d4650a2\x2da476\x2d48bb\x2da110\x2dda6f904d041b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2p99.mount: Deactivated successfully. Jan 17 00:05:31.952729 systemd[1]: var-lib-kubelet-pods-d85a4e8e\x2d7ce4\x2d49d3\x2db8f4\x2dc6b1960b6c16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pbhp.mount: Deactivated successfully. Jan 17 00:05:31.952810 systemd[1]: var-lib-kubelet-pods-d85a4e8e\x2d7ce4\x2d49d3\x2db8f4\x2dc6b1960b6c16-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:05:31.952890 systemd[1]: var-lib-kubelet-pods-d85a4e8e\x2d7ce4\x2d49d3\x2db8f4\x2dc6b1960b6c16-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 17 00:05:32.061528 kubelet[2777]: E0117 00:05:32.061456 2777 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:05:32.919095 kubelet[2777]: I0117 00:05:32.918374 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4650a2-a476-48bb-a110-da6f904d041b" path="/var/lib/kubelet/pods/5d4650a2-a476-48bb-a110-da6f904d041b/volumes" Jan 17 00:05:32.919095 kubelet[2777]: I0117 00:05:32.918880 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" path="/var/lib/kubelet/pods/d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16/volumes" Jan 17 00:05:32.961813 sshd[4382]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:32.968451 systemd[1]: sshd@27-46.224.97.13:22-4.153.228.146:46978.service: Deactivated successfully. Jan 17 00:05:32.973144 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:05:32.974103 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:05:32.975691 systemd-logind[1582]: Removed session 22. Jan 17 00:05:33.066514 systemd[1]: Started sshd@28-46.224.97.13:22-4.153.228.146:46990.service - OpenSSH per-connection server daemon (4.153.228.146:46990). Jan 17 00:05:33.662475 sshd[4549]: Accepted publickey for core from 4.153.228.146 port 46990 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:33.665296 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:33.671141 systemd-logind[1582]: New session 23 of user core. Jan 17 00:05:33.677806 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 17 00:05:35.575455 kubelet[2777]: I0117 00:05:35.571805 2777 memory_manager.go:355] "RemoveStaleState removing state" podUID="d85a4e8e-7ce4-49d3-b8f4-c6b1960b6c16" containerName="cilium-agent" Jan 17 00:05:35.575455 kubelet[2777]: I0117 00:05:35.571841 2777 memory_manager.go:355] "RemoveStaleState removing state" podUID="5d4650a2-a476-48bb-a110-da6f904d041b" containerName="cilium-operator" Jan 17 00:05:35.612247 sshd[4549]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:35.615486 kubelet[2777]: I0117 00:05:35.614187 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-cni-path\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.615486 kubelet[2777]: I0117 00:05:35.615242 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-cilium-ipsec-secrets\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.615486 kubelet[2777]: I0117 00:05:35.615424 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-cilium-run\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.615486 kubelet[2777]: I0117 00:05:35.615441 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-lib-modules\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.615486 
kubelet[2777]: I0117 00:05:35.615457 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwqv9\" (UniqueName: \"kubernetes.io/projected/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-kube-api-access-wwqv9\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616195 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-xtables-lock\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616268 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-host-proc-sys-net\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616288 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-bpf-maps\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616322 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-etc-cni-netd\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616339 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-cilium-config-path\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617278 kubelet[2777]: I0117 00:05:35.616354 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-hubble-tls\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617494 kubelet[2777]: I0117 00:05:35.616402 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-cilium-cgroup\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617494 kubelet[2777]: I0117 00:05:35.616418 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-clustermesh-secrets\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617494 kubelet[2777]: I0117 00:05:35.616434 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-host-proc-sys-kernel\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.617494 kubelet[2777]: I0117 00:05:35.616478 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f3adf4dc-ad5f-485d-abea-1d0c929f1abe-hostproc\") pod \"cilium-299dn\" (UID: \"f3adf4dc-ad5f-485d-abea-1d0c929f1abe\") " pod="kube-system/cilium-299dn" Jan 17 00:05:35.621510 systemd[1]: sshd@28-46.224.97.13:22-4.153.228.146:46990.service: Deactivated successfully. Jan 17 00:05:35.630828 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:05:35.631782 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:05:35.634640 systemd-logind[1582]: Removed session 23. Jan 17 00:05:35.714423 systemd[1]: Started sshd@29-46.224.97.13:22-4.153.228.146:50502.service - OpenSSH per-connection server daemon (4.153.228.146:50502). Jan 17 00:05:35.886096 containerd[1598]: time="2026-01-17T00:05:35.885964324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-299dn,Uid:f3adf4dc-ad5f-485d-abea-1d0c929f1abe,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:35.912143 containerd[1598]: time="2026-01-17T00:05:35.911648784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:35.912143 containerd[1598]: time="2026-01-17T00:05:35.911703865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:35.912143 containerd[1598]: time="2026-01-17T00:05:35.911714785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:35.912143 containerd[1598]: time="2026-01-17T00:05:35.911847785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:35.915851 kubelet[2777]: E0117 00:05:35.915496 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zxpkc" podUID="580c7c8d-d74f-4d5b-abdd-0990fa8818ed" Jan 17 00:05:35.963862 containerd[1598]: time="2026-01-17T00:05:35.963756830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-299dn,Uid:f3adf4dc-ad5f-485d-abea-1d0c929f1abe,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\"" Jan 17 00:05:35.967765 containerd[1598]: time="2026-01-17T00:05:35.967666211Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:05:35.979056 containerd[1598]: time="2026-01-17T00:05:35.978913873Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebc2f5e1e8e415f3f9792e5a38abe1f468f9f8ce21432da64c4852a52e4abae4\"" Jan 17 00:05:35.982186 containerd[1598]: time="2026-01-17T00:05:35.980696282Z" level=info msg="StartContainer for \"ebc2f5e1e8e415f3f9792e5a38abe1f468f9f8ce21432da64c4852a52e4abae4\"" Jan 17 00:05:36.034807 containerd[1598]: time="2026-01-17T00:05:36.034759897Z" level=info msg="StartContainer for \"ebc2f5e1e8e415f3f9792e5a38abe1f468f9f8ce21432da64c4852a52e4abae4\" returns successfully" Jan 17 00:05:36.087542 containerd[1598]: time="2026-01-17T00:05:36.087448224Z" level=info msg="shim disconnected" id=ebc2f5e1e8e415f3f9792e5a38abe1f468f9f8ce21432da64c4852a52e4abae4 namespace=k8s.io Jan 17 00:05:36.088267 containerd[1598]: 
time="2026-01-17T00:05:36.087875907Z" level=warning msg="cleaning up after shim disconnected" id=ebc2f5e1e8e415f3f9792e5a38abe1f468f9f8ce21432da64c4852a52e4abae4 namespace=k8s.io Jan 17 00:05:36.088267 containerd[1598]: time="2026-01-17T00:05:36.087910747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:36.316548 sshd[4562]: Accepted publickey for core from 4.153.228.146 port 50502 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:36.317464 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:36.322489 systemd-logind[1582]: New session 24 of user core. Jan 17 00:05:36.328629 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:05:36.535577 containerd[1598]: time="2026-01-17T00:05:36.534949821Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:05:36.554128 containerd[1598]: time="2026-01-17T00:05:36.553956804Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b0936e6fc1ea841c9f2ba84ee832ac1004488cd17ec84fd27415366606e1f0f\"" Jan 17 00:05:36.555008 containerd[1598]: time="2026-01-17T00:05:36.554849489Z" level=info msg="StartContainer for \"1b0936e6fc1ea841c9f2ba84ee832ac1004488cd17ec84fd27415366606e1f0f\"" Jan 17 00:05:36.603275 containerd[1598]: time="2026-01-17T00:05:36.603221033Z" level=info msg="StartContainer for \"1b0936e6fc1ea841c9f2ba84ee832ac1004488cd17ec84fd27415366606e1f0f\" returns successfully" Jan 17 00:05:36.642336 containerd[1598]: time="2026-01-17T00:05:36.642157805Z" level=info msg="shim disconnected" id=1b0936e6fc1ea841c9f2ba84ee832ac1004488cd17ec84fd27415366606e1f0f namespace=k8s.io Jan 17 00:05:36.642336 containerd[1598]: 
time="2026-01-17T00:05:36.642238725Z" level=warning msg="cleaning up after shim disconnected" id=1b0936e6fc1ea841c9f2ba84ee832ac1004488cd17ec84fd27415366606e1f0f namespace=k8s.io Jan 17 00:05:36.642336 containerd[1598]: time="2026-01-17T00:05:36.642247925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:36.735403 sshd[4562]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:36.741928 systemd[1]: sshd@29-46.224.97.13:22-4.153.228.146:50502.service: Deactivated successfully. Jan 17 00:05:36.745636 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:05:36.746064 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:05:36.748422 systemd-logind[1582]: Removed session 24. Jan 17 00:05:36.843854 systemd[1]: Started sshd@30-46.224.97.13:22-4.153.228.146:50518.service - OpenSSH per-connection server daemon (4.153.228.146:50518). Jan 17 00:05:37.063253 kubelet[2777]: E0117 00:05:37.063049 2777 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:05:37.440246 sshd[4735]: Accepted publickey for core from 4.153.228.146 port 50518 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:37.442716 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:37.450769 systemd-logind[1582]: New session 25 of user core. Jan 17 00:05:37.454753 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 17 00:05:37.545730 containerd[1598]: time="2026-01-17T00:05:37.545678467Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:05:37.567368 containerd[1598]: time="2026-01-17T00:05:37.567327104Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8\"" Jan 17 00:05:37.568382 containerd[1598]: time="2026-01-17T00:05:37.568336750Z" level=info msg="StartContainer for \"94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8\"" Jan 17 00:05:37.634559 containerd[1598]: time="2026-01-17T00:05:37.633553943Z" level=info msg="StartContainer for \"94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8\" returns successfully" Jan 17 00:05:37.664487 containerd[1598]: time="2026-01-17T00:05:37.664278189Z" level=info msg="shim disconnected" id=94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8 namespace=k8s.io Jan 17 00:05:37.664487 containerd[1598]: time="2026-01-17T00:05:37.664333749Z" level=warning msg="cleaning up after shim disconnected" id=94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8 namespace=k8s.io Jan 17 00:05:37.664487 containerd[1598]: time="2026-01-17T00:05:37.664344310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:05:37.723687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d5546d5e881ee6ce50e7e6f2c0a8baa0bddd904deebb2ad040b48f459ecfc8-rootfs.mount: Deactivated successfully. 
Jan 17 00:05:37.915517 kubelet[2777]: E0117 00:05:37.915410 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zxpkc" podUID="580c7c8d-d74f-4d5b-abdd-0990fa8818ed"
Jan 17 00:05:38.548562 containerd[1598]: time="2026-01-17T00:05:38.548341918Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:05:38.567413 containerd[1598]: time="2026-01-17T00:05:38.567278860Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d\""
Jan 17 00:05:38.568208 containerd[1598]: time="2026-01-17T00:05:38.568068904Z" level=info msg="StartContainer for \"06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d\""
Jan 17 00:05:38.633408 containerd[1598]: time="2026-01-17T00:05:38.633348895Z" level=info msg="StartContainer for \"06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d\" returns successfully"
Jan 17 00:05:38.659028 containerd[1598]: time="2026-01-17T00:05:38.658933073Z" level=info msg="shim disconnected" id=06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d namespace=k8s.io
Jan 17 00:05:38.659028 containerd[1598]: time="2026-01-17T00:05:38.659028353Z" level=warning msg="cleaning up after shim disconnected" id=06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d namespace=k8s.io
Jan 17 00:05:38.659273 containerd[1598]: time="2026-01-17T00:05:38.659045473Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:38.723698 systemd[1]: run-containerd-runc-k8s.io-06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d-runc.yD9zLM.mount: Deactivated successfully.
Jan 17 00:05:38.724053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06ab97d0f575bdaf9679bfe9289d1972b10ab318d83f132461c76a815265540d-rootfs.mount: Deactivated successfully.
Jan 17 00:05:39.553521 containerd[1598]: time="2026-01-17T00:05:39.553462870Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:05:39.578152 containerd[1598]: time="2026-01-17T00:05:39.577161477Z" level=info msg="CreateContainer within sandbox \"7ff44f32918ca87147ba67f54cc3abd6864778a0192a73792f6ea22e9d024a84\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3213eb976b465851421a3b39c82a1fc41cf299625a6f53a1fa8dcdc3c3db670\""
Jan 17 00:05:39.578152 containerd[1598]: time="2026-01-17T00:05:39.577849001Z" level=info msg="StartContainer for \"f3213eb976b465851421a3b39c82a1fc41cf299625a6f53a1fa8dcdc3c3db670\""
Jan 17 00:05:39.631875 containerd[1598]: time="2026-01-17T00:05:39.631709729Z" level=info msg="StartContainer for \"f3213eb976b465851421a3b39c82a1fc41cf299625a6f53a1fa8dcdc3c3db670\" returns successfully"
Jan 17 00:05:39.915611 kubelet[2777]: E0117 00:05:39.915399 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zxpkc" podUID="580c7c8d-d74f-4d5b-abdd-0990fa8818ed"
Jan 17 00:05:39.973950 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 17 00:05:40.579474 kubelet[2777]: I0117 00:05:40.579413 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-299dn" podStartSLOduration=5.579395783 podStartE2EDuration="5.579395783s" podCreationTimestamp="2026-01-17 00:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:40.579242942 +0000 UTC m=+203.780700132" watchObservedRunningTime="2026-01-17 00:05:40.579395783 +0000 UTC m=+203.780852973"
Jan 17 00:05:41.029164 kubelet[2777]: I0117 00:05:41.029036 2777 setters.go:602] "Node became not ready" node="ci-4081-3-6-n-ce65c18e74" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:05:41Z","lastTransitionTime":"2026-01-17T00:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:05:41.364930 systemd[1]: Started sshd@31-46.224.97.13:22-185.158.22.150:33747.service - OpenSSH per-connection server daemon (185.158.22.150:33747).
Jan 17 00:05:41.915137 kubelet[2777]: E0117 00:05:41.915048 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zxpkc" podUID="580c7c8d-d74f-4d5b-abdd-0990fa8818ed"
Jan 17 00:05:41.946219 sshd[4978]: Received disconnect from 185.158.22.150 port 33747:11: Bye Bye [preauth]
Jan 17 00:05:41.946219 sshd[4978]: Disconnected from authenticating user root 185.158.22.150 port 33747 [preauth]
Jan 17 00:05:41.950817 systemd[1]: sshd@31-46.224.97.13:22-185.158.22.150:33747.service: Deactivated successfully.
Jan 17 00:05:42.017789 systemd[1]: run-containerd-runc-k8s.io-f3213eb976b465851421a3b39c82a1fc41cf299625a6f53a1fa8dcdc3c3db670-runc.D0DmOe.mount: Deactivated successfully.
Jan 17 00:05:43.025199 systemd-networkd[1244]: lxc_health: Link UP
Jan 17 00:05:43.047775 systemd-networkd[1244]: lxc_health: Gained carrier
Jan 17 00:05:44.089237 systemd-networkd[1244]: lxc_health: Gained IPv6LL
Jan 17 00:05:48.645735 kubelet[2777]: E0117 00:05:48.645519 2777 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36854->127.0.0.1:39777: write tcp 127.0.0.1:36854->127.0.0.1:39777: write: broken pipe
Jan 17 00:05:48.744513 sshd[4735]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:48.750666 systemd[1]: sshd@30-46.224.97.13:22-4.153.228.146:50518.service: Deactivated successfully.
Jan 17 00:05:48.751430 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:05:48.754260 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:05:48.755929 systemd-logind[1582]: Removed session 25.
Jan 17 00:05:58.556419 systemd[1]: Started sshd@32-46.224.97.13:22-14.103.118.136:33318.service - OpenSSH per-connection server daemon (14.103.118.136:33318).
Jan 17 00:06:03.678048 kubelet[2777]: E0117 00:06:03.675845 2777 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52688->10.0.0.2:2379: read: connection timed out"
Jan 17 00:06:03.714348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4-rootfs.mount: Deactivated successfully.
Jan 17 00:06:03.730386 containerd[1598]: time="2026-01-17T00:06:03.730308585Z" level=info msg="shim disconnected" id=0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4 namespace=k8s.io
Jan 17 00:06:03.730386 containerd[1598]: time="2026-01-17T00:06:03.730377866Z" level=warning msg="cleaning up after shim disconnected" id=0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4 namespace=k8s.io
Jan 17 00:06:03.731007 containerd[1598]: time="2026-01-17T00:06:03.730390146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:06:04.355840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405-rootfs.mount: Deactivated successfully.
Jan 17 00:06:04.358275 containerd[1598]: time="2026-01-17T00:06:04.358207396Z" level=info msg="shim disconnected" id=7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405 namespace=k8s.io
Jan 17 00:06:04.358619 containerd[1598]: time="2026-01-17T00:06:04.358425957Z" level=warning msg="cleaning up after shim disconnected" id=7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405 namespace=k8s.io
Jan 17 00:06:04.358619 containerd[1598]: time="2026-01-17T00:06:04.358441837Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:06:04.624205 kubelet[2777]: I0117 00:06:04.623722 2777 scope.go:117] "RemoveContainer" containerID="7afeb9dd3bd8f3fa150e1519ad522bf1e58e037caae9acc84ad06dd1f7cc7405"
Jan 17 00:06:04.627094 containerd[1598]: time="2026-01-17T00:06:04.627038105Z" level=info msg="CreateContainer within sandbox \"3bc37c636dd56211e55268554653b6c94ad455db35be035e7a15977cc262a60a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:06:04.629321 kubelet[2777]: I0117 00:06:04.629001 2777 scope.go:117] "RemoveContainer" containerID="0d9b85362be38b24f081a54dcbb52dbbad5dca68a03cdd02f43e4136862159c4"
Jan 17 00:06:04.633108 containerd[1598]: time="2026-01-17T00:06:04.632753052Z" level=info msg="CreateContainer within sandbox \"4ef47532ba2c0e829172df77a8b4d5258950a0edf597dd83c5587fa7837993f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:06:04.646719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102282690.mount: Deactivated successfully.
Jan 17 00:06:04.653946 containerd[1598]: time="2026-01-17T00:06:04.653839552Z" level=info msg="CreateContainer within sandbox \"3bc37c636dd56211e55268554653b6c94ad455db35be035e7a15977cc262a60a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5d5a97b89db3c97edd848ef2ec5b041f1e7b4c65caa1279f2ad33efc9aed69dd\""
Jan 17 00:06:04.654331 containerd[1598]: time="2026-01-17T00:06:04.654295314Z" level=info msg="CreateContainer within sandbox \"4ef47532ba2c0e829172df77a8b4d5258950a0edf597dd83c5587fa7837993f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6b12aced2a4bd47139729776c2c3cb02b204b2ea99533a5979bb16395fe73591\""
Jan 17 00:06:04.656050 containerd[1598]: time="2026-01-17T00:06:04.654732716Z" level=info msg="StartContainer for \"5d5a97b89db3c97edd848ef2ec5b041f1e7b4c65caa1279f2ad33efc9aed69dd\""
Jan 17 00:06:04.656050 containerd[1598]: time="2026-01-17T00:06:04.654762596Z" level=info msg="StartContainer for \"6b12aced2a4bd47139729776c2c3cb02b204b2ea99533a5979bb16395fe73591\""
Jan 17 00:06:04.741415 containerd[1598]: time="2026-01-17T00:06:04.741340005Z" level=info msg="StartContainer for \"5d5a97b89db3c97edd848ef2ec5b041f1e7b4c65caa1279f2ad33efc9aed69dd\" returns successfully"
Jan 17 00:06:04.742482 containerd[1598]: time="2026-01-17T00:06:04.741340045Z" level=info msg="StartContainer for \"6b12aced2a4bd47139729776c2c3cb02b204b2ea99533a5979bb16395fe73591\" returns successfully"
Jan 17 00:06:07.899100 kubelet[2777]: E0117 00:06:07.898834 2777 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52504->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-ce65c18e74.188b5bee3dd0c972 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-ce65c18e74,UID:984aa048de2b1d8b1df411b858fb274b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-ce65c18e74,},FirstTimestamp:2026-01-17 00:05:57.470488946 +0000 UTC m=+220.671946136,LastTimestamp:2026-01-17 00:05:57.470488946 +0000 UTC m=+220.671946136,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-ce65c18e74,}"