Dec 12 17:24:36.818121 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:24:36.818151 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:24:36.818161 kernel: KASLR enabled
Dec 12 17:24:36.818168 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Dec 12 17:24:36.818174 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Dec 12 17:24:36.818181 kernel: random: crng init done
Dec 12 17:24:36.818187 kernel: secureboot: Secure boot disabled
Dec 12 17:24:36.818193 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:24:36.818199 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Dec 12 17:24:36.818205 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:24:36.818213 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818218 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818224 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818230 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818237 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818245 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818254 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818260 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818267 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:24:36.818274 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 17:24:36.818280 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 12 17:24:36.818286 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:24:36.818292 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 12 17:24:36.818299 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Dec 12 17:24:36.818305 kernel: Zone ranges:
Dec 12 17:24:36.818311 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 12 17:24:36.818318 kernel: DMA32 empty
Dec 12 17:24:36.818324 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 12 17:24:36.818330 kernel: Device empty
Dec 12 17:24:36.818335 kernel: Movable zone start for each node
Dec 12 17:24:36.818341 kernel: Early memory node ranges
Dec 12 17:24:36.818347 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Dec 12 17:24:36.818353 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Dec 12 17:24:36.818359 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Dec 12 17:24:36.818365 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Dec 12 17:24:36.818371 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Dec 12 17:24:36.818377 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Dec 12 17:24:36.818383 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Dec 12 17:24:36.818390 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Dec 12 17:24:36.818396 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Dec 12 17:24:36.818406 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 12 17:24:36.818412 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 12 17:24:36.818419 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 12 17:24:36.818427 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:24:36.818433 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:24:36.818440 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:24:36.818446 kernel: psci: Trusted OS migration not required
Dec 12 17:24:36.818452 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:24:36.818459 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:24:36.818465 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:24:36.818472 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:24:36.818478 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 12 17:24:36.818485 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:24:36.818491 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:24:36.818499 kernel: CPU features: detected: Spectre-v4
Dec 12 17:24:36.818506 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:24:36.818512 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:24:36.818531 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:24:36.818537 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:24:36.818544 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:24:36.818550 kernel: alternatives: applying boot alternatives
Dec 12 17:24:36.818558 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:24:36.818565 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:24:36.818571 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:24:36.818577 kernel: Fallback order for Node 0: 0
Dec 12 17:24:36.818586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Dec 12 17:24:36.818592 kernel: Policy zone: Normal
Dec 12 17:24:36.818598 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:24:36.818605 kernel: software IO TLB: area num 2.
Dec 12 17:24:36.818611 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 12 17:24:36.818617 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 17:24:36.818624 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:24:36.818632 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:24:36.818639 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 17:24:36.818646 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:24:36.818653 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:24:36.818659 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:24:36.818667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 17:24:36.818673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:24:36.818680 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:24:36.818686 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:24:36.818693 kernel: GICv3: 256 SPIs implemented
Dec 12 17:24:36.818699 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:24:36.818705 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:24:36.818712 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:24:36.818718 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:24:36.818724 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:24:36.818731 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:24:36.818739 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:24:36.818746 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:24:36.818753 kernel: GICv3: using LPI property table @0x0000000100120000
Dec 12 17:24:36.818759 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Dec 12 17:24:36.818765 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:24:36.818772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:36.818779 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:24:36.818785 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:24:36.818792 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:24:36.818798 kernel: Console: colour dummy device 80x25
Dec 12 17:24:36.818805 kernel: ACPI: Core revision 20240827
Dec 12 17:24:36.818814 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:24:36.818820 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:24:36.820892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:24:36.820927 kernel: landlock: Up and running.
Dec 12 17:24:36.820935 kernel: SELinux: Initializing.
Dec 12 17:24:36.820942 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:24:36.820949 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:24:36.820956 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:24:36.820964 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:24:36.820977 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:24:36.820984 kernel: Remapping and enabling EFI services.
Dec 12 17:24:36.820991 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:24:36.820998 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:24:36.821005 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:24:36.821012 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Dec 12 17:24:36.821019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:24:36.821026 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:24:36.821033 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 17:24:36.821041 kernel: SMP: Total of 2 processors activated.
Dec 12 17:24:36.821053 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:24:36.821060 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:24:36.821068 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:24:36.821075 kernel: CPU features: detected: Common not Private translations
Dec 12 17:24:36.821082 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:24:36.821089 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:24:36.821097 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:24:36.821105 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:24:36.821113 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:24:36.821120 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:24:36.821127 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:24:36.821134 kernel: alternatives: applying system-wide alternatives
Dec 12 17:24:36.821141 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 12 17:24:36.821149 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Dec 12 17:24:36.821157 kernel: devtmpfs: initialized
Dec 12 17:24:36.821164 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:24:36.821172 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 17:24:36.821180 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:24:36.821187 kernel: 0 pages in range for non-PLT usage
Dec 12 17:24:36.821194 kernel: 508400 pages in range for PLT usage
Dec 12 17:24:36.821201 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:24:36.821208 kernel: SMBIOS 3.0.0 present.
Dec 12 17:24:36.821215 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 12 17:24:36.821222 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:24:36.821229 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:24:36.821238 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:24:36.821245 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:24:36.821253 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:24:36.821260 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:24:36.821267 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Dec 12 17:24:36.821274 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:24:36.821281 kernel: cpuidle: using governor menu
Dec 12 17:24:36.821289 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:24:36.821296 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:24:36.821305 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:24:36.821312 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:24:36.821319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:24:36.821326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:24:36.821334 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:24:36.821341 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:24:36.821348 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:24:36.821355 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:24:36.821362 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:24:36.821371 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:24:36.821378 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:24:36.821385 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:24:36.821392 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:24:36.821399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:24:36.821406 kernel: ACPI: Interpreter enabled
Dec 12 17:24:36.821413 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:24:36.821422 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:24:36.821430 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:24:36.821440 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:24:36.821448 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:24:36.821455 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:24:36.821462 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:24:36.821657 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:24:36.821724 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:24:36.821782 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:24:36.821857 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:24:36.823984 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:24:36.824018 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:24:36.824026 kernel: PCI host bridge to bus 0000:00
Dec 12 17:24:36.824113 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:24:36.824170 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:24:36.824222 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:24:36.824273 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:24:36.824365 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:24:36.824438 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Dec 12 17:24:36.824512 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Dec 12 17:24:36.824639 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 12 17:24:36.824711 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.824773 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Dec 12 17:24:36.824914 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 12 17:24:36.824984 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Dec 12 17:24:36.825044 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Dec 12 17:24:36.825121 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.825183 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Dec 12 17:24:36.825242 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 12 17:24:36.825301 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Dec 12 17:24:36.825371 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.825431 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Dec 12 17:24:36.825489 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 12 17:24:36.825561 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Dec 12 17:24:36.825622 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Dec 12 17:24:36.825691 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.825751 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Dec 12 17:24:36.825818 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 12 17:24:36.825905 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Dec 12 17:24:36.825969 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Dec 12 17:24:36.826037 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.826097 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Dec 12 17:24:36.826155 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 12 17:24:36.826216 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 12 17:24:36.826278 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Dec 12 17:24:36.826439 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.826509 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Dec 12 17:24:36.826588 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 12 17:24:36.828644 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Dec 12 17:24:36.828736 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Dec 12 17:24:36.828812 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.828917 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Dec 12 17:24:36.828980 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 12 17:24:36.829042 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Dec 12 17:24:36.829102 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Dec 12 17:24:36.829175 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.829237 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Dec 12 17:24:36.829300 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 12 17:24:36.829359 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Dec 12 17:24:36.829452 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 12 17:24:36.829525 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Dec 12 17:24:36.829593 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 12 17:24:36.829656 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Dec 12 17:24:36.829726 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Dec 12 17:24:36.829791 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Dec 12 17:24:36.830003 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 12 17:24:36.830074 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Dec 12 17:24:36.830135 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:24:36.830196 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 12 17:24:36.830266 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 12 17:24:36.830328 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Dec 12 17:24:36.830403 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Dec 12 17:24:36.830467 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Dec 12 17:24:36.830572 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 12 17:24:36.830655 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 12 17:24:36.830718 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 12 17:24:36.830789 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 12 17:24:36.830871 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Dec 12 17:24:36.830934 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 12 17:24:36.831009 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Dec 12 17:24:36.831073 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Dec 12 17:24:36.831133 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 12 17:24:36.831202 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 12 17:24:36.831264 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Dec 12 17:24:36.831329 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 12 17:24:36.831391 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 12 17:24:36.831455 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 12 17:24:36.831524 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 12 17:24:36.831590 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 12 17:24:36.831659 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 12 17:24:36.831724 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 12 17:24:36.831793 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 12 17:24:36.831911 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 12 17:24:36.831984 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 12 17:24:36.832044 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 12 17:24:36.832106 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 12 17:24:36.832166 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 12 17:24:36.832226 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 12 17:24:36.832294 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 12 17:24:36.832354 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 12 17:24:36.832413 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Dec 12 17:24:36.832474 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 12 17:24:36.832548 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 12 17:24:36.832609 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 12 17:24:36.832674 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 12 17:24:36.832735 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 12 17:24:36.832796 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 12 17:24:36.832896 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 12 17:24:36.832964 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 12 17:24:36.833023 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 12 17:24:36.833086 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 12 17:24:36.833148 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 12 17:24:36.833207 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 12 17:24:36.833267 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Dec 12 17:24:36.833325 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Dec 12 17:24:36.833386 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Dec 12 17:24:36.833445 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Dec 12 17:24:36.833505 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Dec 12 17:24:36.833604 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Dec 12 17:24:36.833745 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Dec 12 17:24:36.833821 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Dec 12 17:24:36.833910 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Dec 12 17:24:36.833974 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Dec 12 17:24:36.834039 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Dec 12 17:24:36.834099 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Dec 12 17:24:36.834161 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Dec 12 17:24:36.834228 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Dec 12 17:24:36.834290 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Dec 12 17:24:36.834351 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Dec 12 17:24:36.834412 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Dec 12 17:24:36.834473 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Dec 12 17:24:36.834559 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Dec 12 17:24:36.834622 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Dec 12 17:24:36.834684 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Dec 12 17:24:36.834747 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Dec 12 17:24:36.834810 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Dec 12 17:24:36.834910 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Dec 12 17:24:36.834975 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Dec 12 17:24:36.835039 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Dec 12 17:24:36.835099 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Dec 12 17:24:36.835157 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Dec 12 17:24:36.835217 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Dec 12 17:24:36.835276 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Dec 12 17:24:36.835337 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Dec 12 17:24:36.835395 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Dec 12 17:24:36.835457 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Dec 12 17:24:36.835552 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Dec 12 17:24:36.835625 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Dec 12 17:24:36.835688 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Dec 12 17:24:36.835750 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Dec 12 17:24:36.835814 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Dec 12 17:24:36.835918 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Dec 12 17:24:36.835988 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Dec 12 17:24:36.836054 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:24:36.836119 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Dec 12 17:24:36.836179 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 12 17:24:36.836238 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 12 17:24:36.836298 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 12 17:24:36.836360 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 12 17:24:36.836428 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Dec 12 17:24:36.836490 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 12 17:24:36.836568 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 12 17:24:36.836630 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 12 17:24:36.836691 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 12 17:24:36.836759 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Dec 12 17:24:36.836854 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Dec 12 17:24:36.836928 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 12 17:24:36.836991 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 12 17:24:36.837054 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 12 17:24:36.837113 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 12 17:24:36.837180 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Dec 12 17:24:36.837252 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 12 17:24:36.837313 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 12 17:24:36.837373 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 12 17:24:36.837431 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 12 17:24:36.837501 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Dec 12 17:24:36.837607 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Dec 12 17:24:36.837670 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 12 17:24:36.837730 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 12 17:24:36.837789 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 12 17:24:36.837870 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 12 17:24:36.837940 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Dec 12 17:24:36.838007 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Dec 12 17:24:36.838068 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 12 17:24:36.838143 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 12 17:24:36.838202 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 12 17:24:36.838261 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 12 17:24:36.838329 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Dec 12 17:24:36.838391 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Dec 12 17:24:36.838453 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Dec 12 17:24:36.838523 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 12 17:24:36.838593 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 12 17:24:36.838654 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 12 17:24:36.838717 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 12 17:24:36.838780 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 12 17:24:36.838869 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 12 17:24:36.838933 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 12 17:24:36.838993 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 12 17:24:36.839068 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 12 17:24:36.839127 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 12
17:24:36.839187 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Dec 12 17:24:36.839250 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:24:36.839316 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:24:36.839369 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:24:36.839423 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:24:36.839493 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 12 17:24:36.839589 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 12 17:24:36.839662 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 12 17:24:36.839729 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Dec 12 17:24:36.839787 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 12 17:24:36.839864 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 12 17:24:36.839940 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Dec 12 17:24:36.839995 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 12 17:24:36.840055 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 12 17:24:36.840118 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 12 17:24:36.840172 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 12 17:24:36.840226 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 12 17:24:36.840288 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Dec 12 17:24:36.840343 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 12 17:24:36.840398 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 12 17:24:36.840463 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Dec 12 17:24:36.840533 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 12 17:24:36.840591 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 12 17:24:36.840657 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Dec 12 17:24:36.840714 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 12 17:24:36.840768 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 12 17:24:36.840840 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Dec 12 17:24:36.840902 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 12 17:24:36.840956 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 12 17:24:36.841021 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Dec 12 17:24:36.841078 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 12 17:24:36.841134 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:24:36.841146 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:24:36.841154 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:24:36.841164 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:24:36.841172 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:24:36.841179 kernel: iommu: Default domain type: Translated Dec 12 17:24:36.841187 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:24:36.841195 kernel: efivars: Registered efivars operations Dec 12 17:24:36.841203 kernel: vgaarb: loaded Dec 12 17:24:36.841210 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:24:36.841218 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:24:36.841226 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:24:36.841236 kernel: pnp: PnP ACPI init Dec 12 17:24:36.841312 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:24:36.841324 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:24:36.841332 kernel: NET: Registered PF_INET 
protocol family Dec 12 17:24:36.841339 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:24:36.841347 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:24:36.841355 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:24:36.841362 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:24:36.841372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:24:36.841380 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:24:36.841391 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:24:36.841400 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:24:36.841408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:24:36.841479 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 12 17:24:36.841490 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:24:36.841498 kernel: kvm [1]: HYP mode not available Dec 12 17:24:36.841506 kernel: Initialise system trusted keyrings Dec 12 17:24:36.841523 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:24:36.841532 kernel: Key type asymmetric registered Dec 12 17:24:36.841540 kernel: Asymmetric key parser 'x509' registered Dec 12 17:24:36.841547 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:24:36.841555 kernel: io scheduler mq-deadline registered Dec 12 17:24:36.841562 kernel: io scheduler kyber registered Dec 12 17:24:36.841570 kernel: io scheduler bfq registered Dec 12 17:24:36.841578 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 12 17:24:36.841646 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Dec 12 17:24:36.841710 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Dec 12 17:24:36.841772 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.841844 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Dec 12 17:24:36.841908 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Dec 12 17:24:36.841967 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.842029 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Dec 12 17:24:36.842088 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Dec 12 17:24:36.842146 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.842210 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 12 17:24:36.842270 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 12 17:24:36.842327 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.842389 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 12 17:24:36.842449 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 12 17:24:36.842506 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.842608 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 12 17:24:36.842678 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 12 17:24:36.842743 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.842808 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 12 17:24:36.842897 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 12 17:24:36.842958 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.843022 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 12 17:24:36.843082 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 12 17:24:36.843140 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.843154 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 12 17:24:36.843216 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Dec 12 17:24:36.843276 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 12 17:24:36.843335 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:24:36.843345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:24:36.843353 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:24:36.843361 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:24:36.843429 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 12 17:24:36.843494 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 12 17:24:36.843506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:24:36.843514 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 12 17:24:36.843591 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 12 17:24:36.843602 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 12 17:24:36.843610 kernel: thunder_xcv, ver 1.0 Dec 12 17:24:36.843617 kernel: thunder_bgx, ver 1.0 Dec 12 17:24:36.843624 kernel: nicpf, ver 1.0 Dec 12 17:24:36.843632 kernel: nicvf, ver 1.0 Dec 12 17:24:36.843706 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:24:36.843769 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:24:36 UTC (1765560276) Dec 12 17:24:36.843779 kernel: hid: raw HID events 
driver (C) Jiri Kosina Dec 12 17:24:36.843787 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:24:36.843795 kernel: watchdog: NMI not fully supported Dec 12 17:24:36.843802 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:24:36.843809 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:24:36.843817 kernel: Segment Routing with IPv6 Dec 12 17:24:36.843824 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:24:36.843851 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:24:36.843858 kernel: Key type dns_resolver registered Dec 12 17:24:36.843865 kernel: registered taskstats version 1 Dec 12 17:24:36.843873 kernel: Loading compiled-in X.509 certificates Dec 12 17:24:36.843880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 12 17:24:36.843887 kernel: Demotion targets for Node 0: null Dec 12 17:24:36.843895 kernel: Key type .fscrypt registered Dec 12 17:24:36.843902 kernel: Key type fscrypt-provisioning registered Dec 12 17:24:36.843909 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:24:36.843918 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:24:36.843926 kernel: ima: No architecture policies found Dec 12 17:24:36.843934 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:24:36.843942 kernel: clk: Disabling unused clocks Dec 12 17:24:36.843949 kernel: PM: genpd: Disabling unused power domains Dec 12 17:24:36.843957 kernel: Warning: unable to open an initial console. Dec 12 17:24:36.843965 kernel: Freeing unused kernel memory: 39552K Dec 12 17:24:36.843972 kernel: Run /init as init process Dec 12 17:24:36.843980 kernel: with arguments: Dec 12 17:24:36.843990 kernel: /init Dec 12 17:24:36.843997 kernel: with environment: Dec 12 17:24:36.844005 kernel: HOME=/ Dec 12 17:24:36.844012 kernel: TERM=linux Dec 12 17:24:36.844020 systemd[1]: Successfully made /usr/ read-only. 
Dec 12 17:24:36.844033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:24:36.844041 systemd[1]: Detected virtualization kvm. Dec 12 17:24:36.844051 systemd[1]: Detected architecture arm64. Dec 12 17:24:36.844059 systemd[1]: Running in initrd. Dec 12 17:24:36.844066 systemd[1]: No hostname configured, using default hostname. Dec 12 17:24:36.844075 systemd[1]: Hostname set to . Dec 12 17:24:36.844082 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:24:36.844090 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:24:36.844098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:24:36.844106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:24:36.844116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:24:36.844124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:24:36.844131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:24:36.844140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:24:36.844149 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 17:24:36.844157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 17:24:36.844165 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 12 17:24:36.844174 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:24:36.844182 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:24:36.844190 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:24:36.844197 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:24:36.844206 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:24:36.844214 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:24:36.844222 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:24:36.844230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:24:36.844239 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:24:36.844249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:36.844257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:36.844265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:36.844272 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:24:36.844280 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:24:36.844288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:24:36.844296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:24:36.844305 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:24:36.844314 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:24:36.844322 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:24:36.844331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 12 17:24:36.844338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:36.844346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:24:36.844355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:36.844364 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:24:36.844372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:24:36.844411 systemd-journald[246]: Collecting audit messages is disabled. Dec 12 17:24:36.844437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:24:36.844446 kernel: Bridge firewalling registered Dec 12 17:24:36.844454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:24:36.844462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:36.844470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:36.844478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:24:36.844486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:24:36.844496 systemd-journald[246]: Journal started Dec 12 17:24:36.844525 systemd-journald[246]: Runtime Journal (/run/log/journal/8f86610f8fc1457dad7b5638b5ea3d4f) is 8M, max 76.5M, 68.5M free. Dec 12 17:24:36.802497 systemd-modules-load[247]: Inserted module 'overlay' Dec 12 17:24:36.823884 systemd-modules-load[247]: Inserted module 'br_netfilter' Dec 12 17:24:36.848861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:24:36.850872 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 12 17:24:36.856428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:24:36.868399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:36.873032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:36.881134 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:24:36.884327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:36.886293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:24:36.889019 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:24:36.891306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:24:36.930991 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 12 17:24:36.943195 systemd-resolved[287]: Positive Trust Anchors: Dec 12 17:24:36.943213 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:24:36.943245 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:24:36.951386 systemd-resolved[287]: Defaulting to hostname 'linux'. Dec 12 17:24:36.952609 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:24:36.953334 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:37.056867 kernel: SCSI subsystem initialized Dec 12 17:24:37.061896 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:24:37.070898 kernel: iscsi: registered transport (tcp) Dec 12 17:24:37.084862 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:24:37.084947 kernel: QLogic iSCSI HBA Driver Dec 12 17:24:37.111375 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:24:37.133064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:37.138129 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:24:37.194487 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:24:37.196925 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 12 17:24:37.260902 kernel: raid6: neonx8 gen() 15654 MB/s Dec 12 17:24:37.277892 kernel: raid6: neonx4 gen() 15606 MB/s Dec 12 17:24:37.294879 kernel: raid6: neonx2 gen() 13095 MB/s Dec 12 17:24:37.311903 kernel: raid6: neonx1 gen() 10394 MB/s Dec 12 17:24:37.328884 kernel: raid6: int64x8 gen() 6848 MB/s Dec 12 17:24:37.345890 kernel: raid6: int64x4 gen() 7308 MB/s Dec 12 17:24:37.362863 kernel: raid6: int64x2 gen() 6084 MB/s Dec 12 17:24:37.379888 kernel: raid6: int64x1 gen() 5030 MB/s Dec 12 17:24:37.379942 kernel: raid6: using algorithm neonx8 gen() 15654 MB/s Dec 12 17:24:37.396903 kernel: raid6: .... xor() 11938 MB/s, rmw enabled Dec 12 17:24:37.396957 kernel: raid6: using neon recovery algorithm Dec 12 17:24:37.402138 kernel: xor: measuring software checksum speed Dec 12 17:24:37.402193 kernel: 8regs : 21533 MB/sec Dec 12 17:24:37.402212 kernel: 32regs : 21687 MB/sec Dec 12 17:24:37.402859 kernel: arm64_neon : 28041 MB/sec Dec 12 17:24:37.402887 kernel: xor: using function: arm64_neon (28041 MB/sec) Dec 12 17:24:37.457878 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:24:37.467693 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:24:37.470630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:37.503216 systemd-udevd[495]: Using default interface naming scheme 'v255'. Dec 12 17:24:37.507871 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:37.514607 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:24:37.542975 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Dec 12 17:24:37.577046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:24:37.579807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:24:37.656805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 17:24:37.659699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:24:37.771949 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Dec 12 17:24:37.772614 kernel: scsi host0: Virtio SCSI HBA Dec 12 17:24:37.780278 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 12 17:24:37.780367 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 12 17:24:37.806740 kernel: ACPI: bus type USB registered Dec 12 17:24:37.806846 kernel: usbcore: registered new interface driver usbfs Dec 12 17:24:37.806870 kernel: usbcore: registered new interface driver hub Dec 12 17:24:37.806890 kernel: usbcore: registered new device driver usb Dec 12 17:24:37.810183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:37.810322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:37.812279 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:37.816133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:37.821750 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:24:37.825989 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 12 17:24:37.826226 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 12 17:24:37.826307 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 12 17:24:37.826380 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 12 17:24:37.826452 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 12 17:24:37.837206 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:24:37.837290 kernel: GPT:17805311 != 80003071 Dec 12 17:24:37.837311 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 12 17:24:37.838500 kernel: GPT:17805311 != 80003071 Dec 12 17:24:37.838563 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:24:37.839211 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 17:24:37.842185 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 12 17:24:37.848864 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 12 17:24:37.850852 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 12 17:24:37.851051 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 12 17:24:37.854687 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:24:37.854902 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 12 17:24:37.855009 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 12 17:24:37.855088 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 12 17:24:37.860314 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:24:37.860568 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 12 17:24:37.862314 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 12 17:24:37.862913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:37.865959 kernel: hub 1-0:1.0: USB hub found Dec 12 17:24:37.866957 kernel: hub 1-0:1.0: 4 ports detected Dec 12 17:24:37.868851 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 12 17:24:37.869073 kernel: hub 2-0:1.0: USB hub found Dec 12 17:24:37.869164 kernel: hub 2-0:1.0: 4 ports detected Dec 12 17:24:37.941994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 12 17:24:37.961776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 12 17:24:37.971389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Dec 12 17:24:37.973732 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 12 17:24:37.993962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 17:24:37.999701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:24:38.003889 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:24:38.005376 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:24:38.008989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:24:38.010137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:24:38.012134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:24:38.036659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:24:38.041863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 17:24:38.041957 disk-uuid[602]: Primary Header is updated. Dec 12 17:24:38.041957 disk-uuid[602]: Secondary Entries is updated. Dec 12 17:24:38.041957 disk-uuid[602]: Secondary Header is updated. 
Dec 12 17:24:38.108863 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 12 17:24:38.245095 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 12 17:24:38.245157 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 12 17:24:38.245863 kernel: usbcore: registered new interface driver usbhid Dec 12 17:24:38.245895 kernel: usbhid: USB HID core driver Dec 12 17:24:38.350601 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 12 17:24:38.477892 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 12 17:24:38.531542 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 12 17:24:39.074569 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 17:24:39.074644 disk-uuid[610]: The operation has completed successfully. Dec 12 17:24:39.168772 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:24:39.168934 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:24:39.186170 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:24:39.222613 sh[626]: Success Dec 12 17:24:39.240875 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:24:39.240975 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:24:39.242700 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:24:39.253860 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:24:39.316398 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Dec 12 17:24:39.319192 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:24:39.339855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:24:39.352882 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (638)
Dec 12 17:24:39.354874 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:24:39.354958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:39.367062 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 17:24:39.367150 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:24:39.367170 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:24:39.370215 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:24:39.370932 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:24:39.372029 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:24:39.372913 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:24:39.378114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:24:39.421881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (670)
Dec 12 17:24:39.423856 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:24:39.423917 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:39.430478 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 17:24:39.430564 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:24:39.430575 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:24:39.438160 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:24:39.440005 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:24:39.444231 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:24:39.554681 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:24:39.558576 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:24:39.616242 systemd-networkd[812]: lo: Link UP
Dec 12 17:24:39.616255 systemd-networkd[812]: lo: Gained carrier
Dec 12 17:24:39.618660 systemd-networkd[812]: Enumeration completed
Dec 12 17:24:39.618876 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:24:39.619825 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:24:39.619838 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:24:39.620163 systemd[1]: Reached target network.target - Network.
Dec 12 17:24:39.621209 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:24:39.621213 systemd-networkd[812]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:24:39.622253 systemd-networkd[812]: eth0: Link UP
Dec 12 17:24:39.622430 systemd-networkd[812]: eth1: Link UP
Dec 12 17:24:39.622650 systemd-networkd[812]: eth0: Gained carrier
Dec 12 17:24:39.622662 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:24:39.626757 systemd-networkd[812]: eth1: Gained carrier
Dec 12 17:24:39.626776 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:24:39.636878 ignition[718]: Ignition 2.22.0
Dec 12 17:24:39.636888 ignition[718]: Stage: fetch-offline
Dec 12 17:24:39.639463 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:24:39.636931 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:39.641733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 17:24:39.636939 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:39.637026 ignition[718]: parsed url from cmdline: ""
Dec 12 17:24:39.637029 ignition[718]: no config URL provided
Dec 12 17:24:39.637033 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:24:39.637039 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:24:39.637044 ignition[718]: failed to fetch config: resource requires networking
Dec 12 17:24:39.637209 ignition[718]: Ignition finished successfully
Dec 12 17:24:39.672747 ignition[818]: Ignition 2.22.0
Dec 12 17:24:39.672766 ignition[818]: Stage: fetch
Dec 12 17:24:39.672926 systemd-networkd[812]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Dec 12 17:24:39.673962 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:39.673976 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:39.674073 ignition[818]: parsed url from cmdline: ""
Dec 12 17:24:39.674076 ignition[818]: no config URL provided
Dec 12 17:24:39.674083 ignition[818]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:24:39.674090 ignition[818]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:24:39.674140 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 12 17:24:39.674618 ignition[818]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 17:24:39.679954 systemd-networkd[812]: eth0: DHCPv4 address 91.99.219.209/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 12 17:24:39.875543 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 12 17:24:39.883701 ignition[818]: GET result: OK
Dec 12 17:24:39.884129 ignition[818]: parsing config with SHA512: bcb6e2b7e05f7d97e6fa3e5cd63f3f0d09d8486815c0844b8b84fdcbb7a752fe7a028e644778860a6135a27212fa8b9c4ea27248b69b0f3b1ae93ca4bf0b653d
Dec 12 17:24:39.892813 unknown[818]: fetched base config from "system"
Dec 12 17:24:39.892853 unknown[818]: fetched base config from "system"
Dec 12 17:24:39.893408 ignition[818]: fetch: fetch complete
Dec 12 17:24:39.892860 unknown[818]: fetched user config from "hetzner"
Dec 12 17:24:39.893416 ignition[818]: fetch: fetch passed
Dec 12 17:24:39.893470 ignition[818]: Ignition finished successfully
Dec 12 17:24:39.897954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 17:24:39.900185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:24:39.942689 ignition[825]: Ignition 2.22.0
Dec 12 17:24:39.942713 ignition[825]: Stage: kargs
Dec 12 17:24:39.942886 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:39.942896 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:39.943945 ignition[825]: kargs: kargs passed
Dec 12 17:24:39.944002 ignition[825]: Ignition finished successfully
Dec 12 17:24:39.947944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:24:39.951994 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:24:39.984108 ignition[831]: Ignition 2.22.0
Dec 12 17:24:39.984127 ignition[831]: Stage: disks
Dec 12 17:24:39.984290 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:39.984298 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:39.985225 ignition[831]: disks: disks passed
Dec 12 17:24:39.987983 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:24:39.985287 ignition[831]: Ignition finished successfully
Dec 12 17:24:39.991106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:24:39.992390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:24:39.993681 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:24:39.996001 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:24:39.997417 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:24:39.999901 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:24:40.050206 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 12 17:24:40.055401 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:24:40.058687 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:24:40.161862 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:24:40.162300 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:24:40.165062 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:24:40.167968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:24:40.169676 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:24:40.174157 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 12 17:24:40.175559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:24:40.175600 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:24:40.189288 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:24:40.191446 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:24:40.208774 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848)
Dec 12 17:24:40.208860 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:24:40.209892 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:40.218948 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 17:24:40.219024 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:24:40.219882 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:24:40.225438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:24:40.257614 coreos-metadata[850]: Dec 12 17:24:40.257 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 12 17:24:40.260601 coreos-metadata[850]: Dec 12 17:24:40.260 INFO Fetch successful
Dec 12 17:24:40.263803 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:24:40.264728 coreos-metadata[850]: Dec 12 17:24:40.264 INFO wrote hostname ci-4459-2-2-4-c728b0285d to /sysroot/etc/hostname
Dec 12 17:24:40.267260 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 17:24:40.273312 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:24:40.279370 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:24:40.286021 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:24:40.408504 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:24:40.411410 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:24:40.412725 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:24:40.438232 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:24:40.440848 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:24:40.465094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:24:40.477921 ignition[965]: INFO : Ignition 2.22.0
Dec 12 17:24:40.477921 ignition[965]: INFO : Stage: mount
Dec 12 17:24:40.479514 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:40.479514 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:40.482007 ignition[965]: INFO : mount: mount passed
Dec 12 17:24:40.482664 ignition[965]: INFO : Ignition finished successfully
Dec 12 17:24:40.485115 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:24:40.487010 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:24:40.515574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:24:40.544873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977)
Dec 12 17:24:40.547229 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:24:40.547274 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:24:40.551971 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 17:24:40.552029 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:24:40.552053 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:24:40.555436 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:24:40.591263 ignition[994]: INFO : Ignition 2.22.0
Dec 12 17:24:40.591263 ignition[994]: INFO : Stage: files
Dec 12 17:24:40.592434 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:40.592434 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:40.594405 ignition[994]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:24:40.594405 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:24:40.594405 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:24:40.598051 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:24:40.598890 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:24:40.599861 unknown[994]: wrote ssh authorized keys file for user: core
Dec 12 17:24:40.601811 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:24:40.604525 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 12 17:24:40.604525 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 12 17:24:40.651843 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:24:40.740526 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 12 17:24:40.740526 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 17:24:40.740526 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 12 17:24:40.833725 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 12 17:24:41.031777 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 17:24:41.031777 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:24:41.034607 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 12 17:24:41.042561 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Dec 12 17:24:41.107104 systemd-networkd[812]: eth0: Gained IPv6LL
Dec 12 17:24:41.265413 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 12 17:24:41.618947 systemd-networkd[812]: eth1: Gained IPv6LL
Dec 12 17:24:41.808346 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 12 17:24:41.810086 ignition[994]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 12 17:24:41.811248 ignition[994]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:24:41.814101 ignition[994]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:24:41.814101 ignition[994]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 12 17:24:41.814101 ignition[994]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:24:41.818378 ignition[994]: INFO : files: files passed
Dec 12 17:24:41.818378 ignition[994]: INFO : Ignition finished successfully
Dec 12 17:24:41.818226 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:24:41.823657 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:24:41.828632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:24:41.840299 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:24:41.840442 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:24:41.850256 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:41.850256 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:41.853094 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:24:41.855010 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:24:41.856158 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:24:41.858578 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:24:41.930319 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:24:41.930434 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:24:41.932779 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:24:41.933475 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:24:41.934771 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:24:41.935697 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:24:41.980307 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:24:41.983300 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 17:24:42.008307 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:24:42.009786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:24:42.011124 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 17:24:42.012241 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 17:24:42.012911 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:24:42.014397 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 17:24:42.015638 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 17:24:42.016871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 17:24:42.018210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:24:42.019405 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 17:24:42.020667 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:24:42.021937 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 17:24:42.023213 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:24:42.024388 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 17:24:42.025449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 17:24:42.026374 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 17:24:42.027305 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 17:24:42.027439 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:24:42.028741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:24:42.029442 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:24:42.030519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 17:24:42.030955 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:24:42.031662 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 17:24:42.031790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:24:42.033329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 17:24:42.033457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:24:42.034840 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 17:24:42.034950 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 17:24:42.035774 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 12 17:24:42.035897 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 17:24:42.037757 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 17:24:42.042151 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 17:24:42.043431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 17:24:42.043629 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:24:42.045645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 17:24:42.045762 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:24:42.052555 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 17:24:42.059165 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 17:24:42.073985 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 17:24:42.079185 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 17:24:42.080912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 17:24:42.085680 ignition[1047]: INFO : Ignition 2.22.0
Dec 12 17:24:42.086935 ignition[1047]: INFO : Stage: umount
Dec 12 17:24:42.086935 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:24:42.086935 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 12 17:24:42.088741 ignition[1047]: INFO : umount: umount passed
Dec 12 17:24:42.089944 ignition[1047]: INFO : Ignition finished successfully
Dec 12 17:24:42.090710 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 17:24:42.090820 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 17:24:42.093178 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 17:24:42.094000 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 17:24:42.094678 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 17:24:42.094730 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 17:24:42.095888 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 17:24:42.095951 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 17:24:42.096812 systemd[1]: Stopped target network.target - Network.
Dec 12 17:24:42.097695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 17:24:42.097778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:24:42.098856 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 17:24:42.099686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 17:24:42.103958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:24:42.107083 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 17:24:42.108126 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 17:24:42.109308 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 17:24:42.109357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:24:42.110408 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 17:24:42.110486 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:24:42.111442 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 17:24:42.111515 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 17:24:42.112403 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 17:24:42.112443 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 17:24:42.113408 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 17:24:42.113463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:24:42.114635 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 17:24:42.115661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 17:24:42.122411 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 17:24:42.122592 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 17:24:42.128111 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 17:24:42.129696 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:24:42.129924 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:24:42.133694 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:24:42.134061 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 17:24:42.134198 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 17:24:42.136376 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 17:24:42.137219 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 17:24:42.138330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 17:24:42.138371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:24:42.140505 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 17:24:42.141950 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 17:24:42.142028 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:24:42.143158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:24:42.143200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:24:42.145098 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 17:24:42.145160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:24:42.145936 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:24:42.152452 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:24:42.163951 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 17:24:42.165904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:24:42.168710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 17:24:42.168806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:24:42.171667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 17:24:42.171735 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:24:42.172707 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 17:24:42.172764 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:24:42.175656 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 17:24:42.175738 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:24:42.176791 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:24:42.176875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:24:42.184059 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:24:42.186243 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:24:42.186358 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:42.190621 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:24:42.191342 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:42.192981 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:24:42.193081 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:42.195386 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:24:42.195483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:42.196254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:42.196301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:42.200823 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:24:42.202888 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:24:42.208035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:24:42.208178 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:24:42.210654 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:24:42.214161 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:24:42.241605 systemd[1]: Switching root. 
Dec 12 17:24:42.283128 systemd-journald[246]: Journal stopped Dec 12 17:24:43.370918 systemd-journald[246]: Received SIGTERM from PID 1 (systemd). Dec 12 17:24:43.370984 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:24:43.371000 kernel: SELinux: policy capability open_perms=1 Dec 12 17:24:43.371010 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:24:43.371022 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:24:43.371033 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:24:43.371045 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:24:43.371054 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:24:43.371064 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:24:43.371073 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:24:43.371082 kernel: audit: type=1403 audit(1765560282.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:24:43.371093 systemd[1]: Successfully loaded SELinux policy in 75.799ms. Dec 12 17:24:43.371111 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.437ms. Dec 12 17:24:43.371122 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:24:43.371137 systemd[1]: Detected virtualization kvm. Dec 12 17:24:43.371146 systemd[1]: Detected architecture arm64. Dec 12 17:24:43.371156 systemd[1]: Detected first boot. Dec 12 17:24:43.371167 systemd[1]: Hostname set to . Dec 12 17:24:43.371177 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:24:43.371191 zram_generator::config[1092]: No configuration found. 
Dec 12 17:24:43.371202 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:24:43.371212 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:24:43.371223 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:24:43.371234 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:24:43.371244 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:24:43.371254 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:24:43.371264 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:24:43.371276 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:24:43.371286 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:24:43.371297 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:24:43.371307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:24:43.371318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:24:43.371331 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:24:43.371342 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:24:43.371352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:24:43.371363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:24:43.371375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:24:43.371385 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:24:43.371396 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Dec 12 17:24:43.371406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:24:43.371416 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:24:43.371426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:24:43.371438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:24:43.371459 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:24:43.371471 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:24:43.371482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:24:43.371492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:24:43.371503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:24:43.371517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:24:43.371529 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:24:43.371540 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:24:43.371551 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:24:43.371562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:24:43.371572 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:24:43.371582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:43.371593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:43.371603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:43.371614 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:24:43.371624 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 12 17:24:43.371634 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:24:43.371646 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:24:43.371656 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:24:43.371666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:24:43.371676 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:24:43.371686 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:24:43.371697 systemd[1]: Reached target machines.target - Containers. Dec 12 17:24:43.371707 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:24:43.371717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:43.371729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:24:43.371740 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:24:43.371751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:43.371762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:24:43.371773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:24:43.371784 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:24:43.371794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:43.371804 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:24:43.371816 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Dec 12 17:24:43.378873 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:24:43.378945 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:24:43.378957 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:24:43.378969 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:43.378980 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:24:43.378992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:24:43.379003 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:24:43.379014 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:24:43.379025 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:24:43.379035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:24:43.379048 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:24:43.379058 systemd[1]: Stopped verity-setup.service. Dec 12 17:24:43.379069 kernel: fuse: init (API version 7.41) Dec 12 17:24:43.379081 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:24:43.379091 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:24:43.379102 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:24:43.379112 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:24:43.379122 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:24:43.379134 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 12 17:24:43.379144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:43.379155 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:24:43.379166 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:24:43.379176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:43.379186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:43.379197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:43.379207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:43.379218 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:24:43.379230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:24:43.379247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:24:43.379258 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:24:43.379268 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:24:43.379280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:24:43.379291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:24:43.379303 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:24:43.379314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:24:43.379326 kernel: loop: module loaded Dec 12 17:24:43.379337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:24:43.379347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:24:43.379357 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:24:43.379368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:24:43.379413 systemd-journald[1156]: Collecting audit messages is disabled. Dec 12 17:24:43.379477 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:24:43.379493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:24:43.379507 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:24:43.379519 systemd-journald[1156]: Journal started Dec 12 17:24:43.379541 systemd-journald[1156]: Runtime Journal (/run/log/journal/8f86610f8fc1457dad7b5638b5ea3d4f) is 8M, max 76.5M, 68.5M free. Dec 12 17:24:43.054399 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:24:43.080315 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 17:24:43.081071 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:24:43.385894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:24:43.398114 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:24:43.391693 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:43.391957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:43.394891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:43.397347 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:24:43.399199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Dec 12 17:24:43.400296 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:24:43.419738 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:24:43.424921 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:24:43.426133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:24:43.446196 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:24:43.448410 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:24:43.458159 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:24:43.465377 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:24:43.467861 kernel: loop0: detected capacity change from 0 to 100632 Dec 12 17:24:43.478890 kernel: ACPI: bus type drm_connector registered Dec 12 17:24:43.491953 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:24:43.492208 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:24:43.495301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:43.512881 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:24:43.513044 systemd-journald[1156]: Time spent on flushing to /var/log/journal/8f86610f8fc1457dad7b5638b5ea3d4f is 73.848ms for 1182 entries. Dec 12 17:24:43.513044 systemd-journald[1156]: System Journal (/var/log/journal/8f86610f8fc1457dad7b5638b5ea3d4f) is 8M, max 584.8M, 576.8M free. Dec 12 17:24:43.605182 systemd-journald[1156]: Received client request to flush runtime journal. Dec 12 17:24:43.605242 kernel: loop1: detected capacity change from 0 to 207008 Dec 12 17:24:43.554366 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. 
Dec 12 17:24:43.554380 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Dec 12 17:24:43.558786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:24:43.574748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:43.592188 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:24:43.609170 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:24:43.626988 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:24:43.642417 kernel: loop2: detected capacity change from 0 to 8 Dec 12 17:24:43.661856 kernel: loop3: detected capacity change from 0 to 119840 Dec 12 17:24:43.680891 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:24:43.690562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:24:43.712245 kernel: loop4: detected capacity change from 0 to 100632 Dec 12 17:24:43.737892 kernel: loop5: detected capacity change from 0 to 207008 Dec 12 17:24:43.740600 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Dec 12 17:24:43.740622 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Dec 12 17:24:43.753009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:43.760903 kernel: loop6: detected capacity change from 0 to 8 Dec 12 17:24:43.763040 kernel: loop7: detected capacity change from 0 to 119840 Dec 12 17:24:43.771088 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 12 17:24:43.771764 (sd-merge)[1236]: Merged extensions into '/usr'. Dec 12 17:24:43.778739 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:24:43.778944 systemd[1]: Reloading... 
Dec 12 17:24:43.959516 zram_generator::config[1267]: No configuration found. Dec 12 17:24:44.092892 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:24:44.147800 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:24:44.148549 systemd[1]: Reloading finished in 369 ms. Dec 12 17:24:44.180905 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:24:44.182175 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:24:44.195060 systemd[1]: Starting ensure-sysext.service... Dec 12 17:24:44.198150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:24:44.226038 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:24:44.226055 systemd[1]: Reloading... Dec 12 17:24:44.236019 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:24:44.236055 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:24:44.236336 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:24:44.236548 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:24:44.237546 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:24:44.237884 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Dec 12 17:24:44.237928 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Dec 12 17:24:44.242630 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 12 17:24:44.242646 systemd-tmpfiles[1302]: Skipping /boot Dec 12 17:24:44.252541 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:24:44.252558 systemd-tmpfiles[1302]: Skipping /boot Dec 12 17:24:44.295875 zram_generator::config[1329]: No configuration found. Dec 12 17:24:44.479293 systemd[1]: Reloading finished in 252 ms. Dec 12 17:24:44.502592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:24:44.510702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:44.518013 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:24:44.522606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:24:44.526082 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:24:44.530940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:24:44.537984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:44.544410 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:24:44.552427 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:24:44.555530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:44.558567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:44.564878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:24:44.582149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:44.584006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:24:44.584178 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:44.586982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:44.588090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:44.588172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:44.591807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:44.595197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:24:44.597080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:44.597366 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:44.607899 systemd[1]: Finished ensure-sysext.service. Dec 12 17:24:44.615614 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:24:44.619225 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:24:44.622520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:24:44.638149 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 12 17:24:44.654596 systemd-udevd[1372]: Using default interface naming scheme 'v255'. Dec 12 17:24:44.668249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:44.671134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:44.674770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:44.675326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:44.678537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:24:44.693137 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:24:44.696493 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:44.696686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:44.698261 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:24:44.699146 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:24:44.705410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:24:44.705514 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:24:44.712421 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:24:44.719475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:44.725043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:24:44.728067 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 12 17:24:44.737274 augenrules[1417]: No rules Dec 12 17:24:44.746200 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:24:44.746927 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:24:44.898297 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:24:45.011262 systemd-networkd[1412]: lo: Link UP Dec 12 17:24:45.011276 systemd-networkd[1412]: lo: Gained carrier Dec 12 17:24:45.014052 systemd-networkd[1412]: Enumeration completed Dec 12 17:24:45.014729 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:24:45.014741 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:24:45.014952 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:24:45.018386 systemd-networkd[1412]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:24:45.018397 systemd-networkd[1412]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:24:45.019008 systemd-networkd[1412]: eth0: Link UP Dec 12 17:24:45.019127 systemd-networkd[1412]: eth0: Gained carrier Dec 12 17:24:45.019148 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:24:45.026450 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:24:45.032296 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 12 17:24:45.033659 systemd-networkd[1412]: eth1: Link UP Dec 12 17:24:45.036452 systemd-networkd[1412]: eth1: Gained carrier Dec 12 17:24:45.036487 systemd-networkd[1412]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:24:45.050896 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:24:45.075976 systemd-networkd[1412]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Dec 12 17:24:45.080938 systemd-networkd[1412]: eth0: DHCPv4 address 91.99.219.209/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 12 17:24:45.085987 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:24:45.126581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 17:24:45.132277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:24:45.151673 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:24:45.152644 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:24:45.156777 systemd-resolved[1371]: Positive Trust Anchors: Dec 12 17:24:45.156804 systemd-resolved[1371]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:24:45.156854 systemd-resolved[1371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:24:45.161374 systemd-resolved[1371]: Using system hostname 'ci-4459-2-2-4-c728b0285d'. Dec 12 17:24:45.163710 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:24:45.165219 systemd[1]: Reached target network.target - Network. Dec 12 17:24:45.165774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:45.166885 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:24:45.167888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:24:45.169062 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:24:45.169885 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:24:45.171221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:24:45.171891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:24:45.172940 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:24:45.172977 systemd[1]: Reached target paths.target - Path Units. 
Dec 12 17:24:45.173415 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:24:45.175691 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:24:45.178811 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:24:45.184613 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:24:45.187119 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:24:45.187808 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:24:45.197033 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:24:45.198648 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:24:45.202268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:24:45.204496 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:24:45.206599 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:24:45.208016 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:24:45.209236 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:24:45.209278 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:24:45.213272 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:24:45.216980 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:24:45.223260 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:24:45.228445 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:24:45.243304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 12 17:24:45.247824 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:24:45.248463 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:24:45.252254 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:24:45.259125 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:24:45.264098 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 12 17:24:45.264190 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 17:24:45.264203 kernel: [drm] features: -context_init Dec 12 17:24:45.270603 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:24:45.274128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:24:45.280518 jq[1486]: false Dec 12 17:24:45.282146 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:24:45.285035 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:24:45.296336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:24:45.300263 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:24:45.301696 coreos-metadata[1483]: Dec 12 17:24:45.301 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 12 17:24:45.303147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 12 17:24:45.304298 coreos-metadata[1483]: Dec 12 17:24:45.303 INFO Fetch successful Dec 12 17:24:45.304574 coreos-metadata[1483]: Dec 12 17:24:45.304 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 12 17:24:45.305851 coreos-metadata[1483]: Dec 12 17:24:45.305 INFO Fetch successful Dec 12 17:24:45.310936 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:24:45.313981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:24:45.314379 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:24:45.329811 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 12 17:24:45.357562 jq[1497]: true Dec 12 17:24:45.364988 extend-filesystems[1489]: Found /dev/sda6 Dec 12 17:24:45.369185 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 12 17:24:45.372501 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:24:45.373937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:24:45.393363 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:24:45.402703 update_engine[1495]: I20251212 17:24:45.400638 1495 main.cc:92] Flatcar Update Engine starting Dec 12 17:24:45.407672 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 12 17:24:45.412983 tar[1502]: linux-arm64/LICENSE Dec 12 17:24:45.414280 extend-filesystems[1489]: Found /dev/sda9 Dec 12 17:24:45.415685 tar[1502]: linux-arm64/helm Dec 12 17:24:45.427260 extend-filesystems[1489]: Checking size of /dev/sda9 Dec 12 17:24:45.430956 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:24:45.441091 kernel: [drm] number of scanouts: 1 Dec 12 17:24:45.441200 kernel: [drm] number of cap sets: 0 Dec 12 17:24:45.448190 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Dec 12 17:24:45.457757 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 17:24:45.465772 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 17:24:45.471273 jq[1522]: true Dec 12 17:24:45.479898 dbus-daemon[1484]: [system] SELinux support is enabled Dec 12 17:24:45.480137 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:24:45.483288 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:24:45.483338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:24:45.485982 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:24:45.486018 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:24:45.492477 extend-filesystems[1489]: Resized partition /dev/sda9 Dec 12 17:24:45.504691 extend-filesystems[1548]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:24:45.500387 systemd[1]: Started update-engine.service - Update Engine. 
Dec 12 17:24:45.506648 update_engine[1495]: I20251212 17:24:45.502307 1495 update_check_scheduler.cc:74] Next update check in 4m31s Dec 12 17:24:45.506312 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:24:45.519909 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 12 17:24:45.544271 systemd-timesyncd[1386]: Contacted time server 81.7.16.52:123 (0.flatcar.pool.ntp.org). Dec 12 17:24:45.544369 systemd-timesyncd[1386]: Initial clock synchronization to Fri 2025-12-12 17:24:45.783925 UTC. Dec 12 17:24:45.625718 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:24:45.628239 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:24:45.683494 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:24:45.689904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:24:45.690229 systemd-logind[1494]: New seat seat0. Dec 12 17:24:45.701040 systemd[1]: Starting sshkeys.service... Dec 12 17:24:45.702006 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:24:45.754811 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:24:45.760044 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 17:24:45.770863 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 12 17:24:45.794974 extend-filesystems[1548]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 17:24:45.794974 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 12 17:24:45.794974 extend-filesystems[1548]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Dec 12 17:24:45.806092 extend-filesystems[1489]: Resized filesystem in /dev/sda9 Dec 12 17:24:45.811163 coreos-metadata[1576]: Dec 12 17:24:45.797 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 12 17:24:45.811163 coreos-metadata[1576]: Dec 12 17:24:45.801 INFO Fetch successful Dec 12 17:24:45.795629 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:24:45.796886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:24:45.804896 unknown[1576]: wrote ssh authorized keys file for user: core Dec 12 17:24:45.881955 systemd-logind[1494]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 12 17:24:45.886348 containerd[1523]: time="2025-12-12T17:24:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:24:45.890098 containerd[1523]: time="2025-12-12T17:24:45.889401160Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:24:45.899369 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:24:45.908523 update-ssh-keys[1581]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:24:45.909898 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:24:45.925136 systemd[1]: Finished sshkeys.service. 
Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935148040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.96µs" Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935197320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935219480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935404680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935459360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935493320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935559520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:24:45.935559 containerd[1523]: time="2025-12-12T17:24:45.935573520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:24:45.936807 containerd[1523]: time="2025-12-12T17:24:45.936752560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:24:45.936807 containerd[1523]: time="2025-12-12T17:24:45.936800960Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:24:45.936895 containerd[1523]: time="2025-12-12T17:24:45.936818440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:24:45.936895 containerd[1523]: time="2025-12-12T17:24:45.936839680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:24:45.937229 containerd[1523]: time="2025-12-12T17:24:45.937195000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:24:45.937895 containerd[1523]: time="2025-12-12T17:24:45.937499960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:24:45.937895 containerd[1523]: time="2025-12-12T17:24:45.937547240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:24:45.937895 containerd[1523]: time="2025-12-12T17:24:45.937559280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:24:45.937895 containerd[1523]: time="2025-12-12T17:24:45.937595480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:24:45.941087 containerd[1523]: time="2025-12-12T17:24:45.938949280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:24:45.941087 containerd[1523]: time="2025-12-12T17:24:45.939070000Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:24:45.947508 containerd[1523]: time="2025-12-12T17:24:45.947373960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947710680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947784960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947802960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947823240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947847040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947863000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947877600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947890960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947905600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947918920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.947944320Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.948114800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.948138480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:24:45.948860 containerd[1523]: time="2025-12-12T17:24:45.948155520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948169240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948187720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948199760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948212280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948223640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948241400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948254120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948265760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:24:45.949189 containerd[1523]: 
time="2025-12-12T17:24:45.948563680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:24:45.949189 containerd[1523]: time="2025-12-12T17:24:45.948586360Z" level=info msg="Start snapshots syncer" Dec 12 17:24:45.949809 containerd[1523]: time="2025-12-12T17:24:45.949777360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:24:45.953138 containerd[1523]: time="2025-12-12T17:24:45.952342800Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.957974880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958108200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958317720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958350560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958363640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958380480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958400680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958456040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958477320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958522600Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958540840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958558320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958606080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:24:45.959859 containerd[1523]: time="2025-12-12T17:24:45.958633080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958644720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958659880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958672400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958687400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958701400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.958819280Z" level=info msg="runtime interface created" Dec 12 17:24:45.960215 containerd[1523]: 
time="2025-12-12T17:24:45.958825080Z" level=info msg="created NRI interface" Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.959449360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.959480040Z" level=info msg="Connect containerd service" Dec 12 17:24:45.960215 containerd[1523]: time="2025-12-12T17:24:45.959527320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:24:45.960784 containerd[1523]: time="2025-12-12T17:24:45.960734440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:24:46.036091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:46.067852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:46.068302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:46.077074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 12 17:24:46.136526 locksmithd[1551]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166458006Z" level=info msg="Start subscribing containerd event" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166583581Z" level=info msg="Start recovering state" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166730745Z" level=info msg="Start event monitor" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166762056Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166774993Z" level=info msg="Start streaming server" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166785746Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166793738Z" level=info msg="runtime interface starting up..." Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166800536Z" level=info msg="starting plugins..." Dec 12 17:24:46.167338 containerd[1523]: time="2025-12-12T17:24:46.166924422Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:24:46.169844 containerd[1523]: time="2025-12-12T17:24:46.169376391Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:24:46.172329 containerd[1523]: time="2025-12-12T17:24:46.171288691Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:24:46.173054 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:24:46.175918 containerd[1523]: time="2025-12-12T17:24:46.173965154Z" level=info msg="containerd successfully booted in 0.288153s" Dec 12 17:24:46.226985 systemd-networkd[1412]: eth0: Gained IPv6LL Dec 12 17:24:46.230255 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 12 17:24:46.236594 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:24:46.242809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:24:46.250634 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:24:46.296469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:46.326748 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:24:46.484028 systemd-networkd[1412]: eth1: Gained IPv6LL Dec 12 17:24:46.603700 tar[1502]: linux-arm64/README.md Dec 12 17:24:46.640974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:24:47.034665 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:24:47.068002 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:24:47.071263 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:24:47.092284 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:24:47.093953 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:24:47.098931 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:24:47.126236 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:24:47.133906 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:24:47.139357 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:24:47.142085 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:24:47.255950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:24:47.257544 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:24:47.260961 systemd[1]: Startup finished in 2.398s (kernel) + 5.846s (initrd) + 4.853s (userspace) = 13.098s. 
Dec 12 17:24:47.270793 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:47.854750 kubelet[1653]: E1212 17:24:47.854601 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:47.859444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:47.859631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:24:47.860306 systemd[1]: kubelet.service: Consumed 906ms CPU time, 255.3M memory peak. Dec 12 17:24:50.314766 kernel: hrtimer: interrupt took 3124724 ns Dec 12 17:24:58.110484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:24:58.114437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:24:58.313373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:24:58.325951 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:24:58.377858 kubelet[1672]: E1212 17:24:58.377722 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:24:58.383051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:24:58.383287 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 12 17:24:58.384306 systemd[1]: kubelet.service: Consumed 197ms CPU time, 106.5M memory peak. Dec 12 17:25:08.434334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:25:08.437607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:08.624534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:08.647055 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:08.697605 kubelet[1686]: E1212 17:25:08.697469 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:08.700811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:08.701107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:08.701871 systemd[1]: kubelet.service: Consumed 183ms CPU time, 105M memory peak. Dec 12 17:25:12.346491 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:25:12.353588 systemd[1]: Started sshd@0-91.99.219.209:22-139.178.89.65:47958.service - OpenSSH per-connection server daemon (139.178.89.65:47958). Dec 12 17:25:13.354997 sshd[1695]: Accepted publickey for core from 139.178.89.65 port 47958 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:13.358088 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:13.371717 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:25:13.374104 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 12 17:25:13.377653 systemd-logind[1494]: New session 1 of user core. Dec 12 17:25:13.396191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:25:13.399738 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:25:13.417912 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:25:13.421606 systemd-logind[1494]: New session c1 of user core. Dec 12 17:25:13.567190 systemd[1700]: Queued start job for default target default.target. Dec 12 17:25:13.572398 systemd[1700]: Created slice app.slice - User Application Slice. Dec 12 17:25:13.572482 systemd[1700]: Reached target paths.target - Paths. Dec 12 17:25:13.572551 systemd[1700]: Reached target timers.target - Timers. Dec 12 17:25:13.575982 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:25:13.588212 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:25:13.588415 systemd[1700]: Reached target sockets.target - Sockets. Dec 12 17:25:13.588478 systemd[1700]: Reached target basic.target - Basic System. Dec 12 17:25:13.588511 systemd[1700]: Reached target default.target - Main User Target. Dec 12 17:25:13.588538 systemd[1700]: Startup finished in 158ms. Dec 12 17:25:13.588934 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:25:13.598655 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:25:14.277549 systemd[1]: Started sshd@1-91.99.219.209:22-139.178.89.65:47960.service - OpenSSH per-connection server daemon (139.178.89.65:47960). Dec 12 17:25:15.258990 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 47960 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:15.262288 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:15.272981 systemd-logind[1494]: New session 2 of user core. 
Dec 12 17:25:15.281179 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:25:15.933915 sshd[1714]: Connection closed by 139.178.89.65 port 47960 Dec 12 17:25:15.934726 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:15.940626 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:25:15.940868 systemd[1]: sshd@1-91.99.219.209:22-139.178.89.65:47960.service: Deactivated successfully. Dec 12 17:25:15.943911 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:25:15.947003 systemd-logind[1494]: Removed session 2. Dec 12 17:25:16.109969 systemd[1]: Started sshd@2-91.99.219.209:22-139.178.89.65:47976.service - OpenSSH per-connection server daemon (139.178.89.65:47976). Dec 12 17:25:17.113060 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 47976 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:17.115621 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:17.123152 systemd-logind[1494]: New session 3 of user core. Dec 12 17:25:17.132158 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:25:17.793245 sshd[1723]: Connection closed by 139.178.89.65 port 47976 Dec 12 17:25:17.793130 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:17.801104 systemd[1]: sshd@2-91.99.219.209:22-139.178.89.65:47976.service: Deactivated successfully. Dec 12 17:25:17.804325 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:25:17.806782 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:25:17.811060 systemd-logind[1494]: Removed session 3. Dec 12 17:25:17.981100 systemd[1]: Started sshd@3-91.99.219.209:22-139.178.89.65:47984.service - OpenSSH per-connection server daemon (139.178.89.65:47984). Dec 12 17:25:18.821994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Dec 12 17:25:18.825282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:18.994010 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 47984 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:18.996134 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:19.001180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:19.007629 systemd-logind[1494]: New session 4 of user core. Dec 12 17:25:19.017478 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:19.017517 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:25:19.072374 kubelet[1740]: E1212 17:25:19.072229 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:19.075458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:19.075627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:19.076319 systemd[1]: kubelet.service: Consumed 188ms CPU time, 104.5M memory peak. Dec 12 17:25:19.680873 sshd[1744]: Connection closed by 139.178.89.65 port 47984 Dec 12 17:25:19.680873 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:19.686333 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:25:19.687344 systemd[1]: sshd@3-91.99.219.209:22-139.178.89.65:47984.service: Deactivated successfully. Dec 12 17:25:19.689563 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:25:19.692313 systemd-logind[1494]: Removed session 4. 
Dec 12 17:25:19.850463 systemd[1]: Started sshd@4-91.99.219.209:22-139.178.89.65:47992.service - OpenSSH per-connection server daemon (139.178.89.65:47992). Dec 12 17:25:20.835631 sshd[1753]: Accepted publickey for core from 139.178.89.65 port 47992 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:20.838795 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:20.844882 systemd-logind[1494]: New session 5 of user core. Dec 12 17:25:20.853216 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:25:21.358236 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:25:21.358729 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:21.375955 sudo[1757]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:21.531809 sshd[1756]: Connection closed by 139.178.89.65 port 47992 Dec 12 17:25:21.533518 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:21.540070 systemd[1]: sshd@4-91.99.219.209:22-139.178.89.65:47992.service: Deactivated successfully. Dec 12 17:25:21.543872 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:25:21.545253 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:25:21.547458 systemd-logind[1494]: Removed session 5. Dec 12 17:25:21.706980 systemd[1]: Started sshd@5-91.99.219.209:22-139.178.89.65:57184.service - OpenSSH per-connection server daemon (139.178.89.65:57184). Dec 12 17:25:22.727583 sshd[1763]: Accepted publickey for core from 139.178.89.65 port 57184 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:22.731412 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:22.741120 systemd-logind[1494]: New session 6 of user core. 
Dec 12 17:25:22.748122 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:25:23.251352 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:25:23.251634 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:23.260936 sudo[1768]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:23.269038 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:25:23.270153 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:23.287925 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:25:23.365780 augenrules[1790]: No rules Dec 12 17:25:23.367023 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:25:23.367233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:25:23.368657 sudo[1767]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:23.529029 sshd[1766]: Connection closed by 139.178.89.65 port 57184 Dec 12 17:25:23.529423 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:23.536752 systemd[1]: sshd@5-91.99.219.209:22-139.178.89.65:57184.service: Deactivated successfully. Dec 12 17:25:23.537994 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:25:23.540144 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:25:23.545527 systemd-logind[1494]: Removed session 6. Dec 12 17:25:23.708750 systemd[1]: Started sshd@6-91.99.219.209:22-139.178.89.65:57196.service - OpenSSH per-connection server daemon (139.178.89.65:57196). 
Dec 12 17:25:24.726436 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 57196 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:25:24.730736 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:24.738345 systemd-logind[1494]: New session 7 of user core. Dec 12 17:25:24.744044 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:25:25.244347 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:25:25.244624 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:25.613057 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:25:25.626772 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:25:25.886994 dockerd[1822]: time="2025-12-12T17:25:25.886363876Z" level=info msg="Starting up" Dec 12 17:25:25.889504 dockerd[1822]: time="2025-12-12T17:25:25.889454000Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:25:25.904310 dockerd[1822]: time="2025-12-12T17:25:25.904230039Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:25:25.951710 dockerd[1822]: time="2025-12-12T17:25:25.951628513Z" level=info msg="Loading containers: start." Dec 12 17:25:25.966909 kernel: Initializing XFRM netlink socket Dec 12 17:25:26.264028 systemd-networkd[1412]: docker0: Link UP Dec 12 17:25:26.269670 dockerd[1822]: time="2025-12-12T17:25:26.269583306Z" level=info msg="Loading containers: done." Dec 12 17:25:26.292398 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck454845435-merged.mount: Deactivated successfully. 
Dec 12 17:25:26.293709 dockerd[1822]: time="2025-12-12T17:25:26.293636670Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:25:26.293856 dockerd[1822]: time="2025-12-12T17:25:26.293779034Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:25:26.293986 dockerd[1822]: time="2025-12-12T17:25:26.293958089Z" level=info msg="Initializing buildkit" Dec 12 17:25:26.329696 dockerd[1822]: time="2025-12-12T17:25:26.329574252Z" level=info msg="Completed buildkit initialization" Dec 12 17:25:26.340722 dockerd[1822]: time="2025-12-12T17:25:26.340628214Z" level=info msg="Daemon has completed initialization" Dec 12 17:25:26.342083 dockerd[1822]: time="2025-12-12T17:25:26.341215315Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:25:26.341630 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:25:27.746404 containerd[1523]: time="2025-12-12T17:25:27.745872160Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:25:28.436064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860714382.mount: Deactivated successfully. Dec 12 17:25:29.183781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 17:25:29.186826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:29.371196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:25:29.381211 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:29.434852 kubelet[2097]: E1212 17:25:29.434664 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:29.439082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:29.439273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:29.441064 systemd[1]: kubelet.service: Consumed 177ms CPU time, 107.1M memory peak. Dec 12 17:25:29.518612 containerd[1523]: time="2025-12-12T17:25:29.517095624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:29.518612 containerd[1523]: time="2025-12-12T17:25:29.518545966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26432057" Dec 12 17:25:29.519504 containerd[1523]: time="2025-12-12T17:25:29.519461927Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:29.522712 containerd[1523]: time="2025-12-12T17:25:29.522665489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:29.524146 containerd[1523]: time="2025-12-12T17:25:29.524102787Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id 
\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.778179133s" Dec 12 17:25:29.524329 containerd[1523]: time="2025-12-12T17:25:29.524310562Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:25:29.525765 containerd[1523]: time="2025-12-12T17:25:29.525716612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:25:30.808272 containerd[1523]: time="2025-12-12T17:25:30.808215999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:30.810563 containerd[1523]: time="2025-12-12T17:25:30.810501170Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618975" Dec 12 17:25:30.811845 containerd[1523]: time="2025-12-12T17:25:30.811750922Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:30.815159 containerd[1523]: time="2025-12-12T17:25:30.815085156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:30.816867 containerd[1523]: time="2025-12-12T17:25:30.816066201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.290307498s" Dec 12 17:25:30.816867 containerd[1523]: time="2025-12-12T17:25:30.816118614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 17:25:30.817561 containerd[1523]: time="2025-12-12T17:25:30.817508201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:25:31.168165 update_engine[1495]: I20251212 17:25:31.167935 1495 update_attempter.cc:509] Updating boot flags... Dec 12 17:25:32.036872 containerd[1523]: time="2025-12-12T17:25:32.036549665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.042384 containerd[1523]: time="2025-12-12T17:25:32.042280401Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618456" Dec 12 17:25:32.043992 containerd[1523]: time="2025-12-12T17:25:32.043907729Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.048462 containerd[1523]: time="2025-12-12T17:25:32.048373099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:32.049930 containerd[1523]: time="2025-12-12T17:25:32.049498233Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.231942821s" Dec 12 17:25:32.049930 containerd[1523]: time="2025-12-12T17:25:32.049546404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:25:32.050196 containerd[1523]: time="2025-12-12T17:25:32.050157582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:25:33.174130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264418578.mount: Deactivated successfully. Dec 12 17:25:33.497562 containerd[1523]: time="2025-12-12T17:25:33.497496655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:33.498646 containerd[1523]: time="2025-12-12T17:25:33.498595012Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561825" Dec 12 17:25:33.499339 containerd[1523]: time="2025-12-12T17:25:33.499286240Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:33.502044 containerd[1523]: time="2025-12-12T17:25:33.501961817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:33.502911 containerd[1523]: time="2025-12-12T17:25:33.502410553Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.452207001s" Dec 12 17:25:33.502911 containerd[1523]: time="2025-12-12T17:25:33.502444801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:25:33.503126 containerd[1523]: time="2025-12-12T17:25:33.503089379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:25:34.125727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357942953.mount: Deactivated successfully. Dec 12 17:25:34.828057 containerd[1523]: time="2025-12-12T17:25:34.827936999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.829849 containerd[1523]: time="2025-12-12T17:25:34.829653911Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Dec 12 17:25:34.830616 containerd[1523]: time="2025-12-12T17:25:34.830570499Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.837594 containerd[1523]: time="2025-12-12T17:25:34.836787855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:34.837594 containerd[1523]: time="2025-12-12T17:25:34.837307881Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.334178694s" Dec 12 17:25:34.837594 containerd[1523]: time="2025-12-12T17:25:34.837385417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:25:34.838032 containerd[1523]: time="2025-12-12T17:25:34.838008865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:25:35.384766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529608523.mount: Deactivated successfully. Dec 12 17:25:35.391604 containerd[1523]: time="2025-12-12T17:25:35.391376629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:35.393092 containerd[1523]: time="2025-12-12T17:25:35.393044795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Dec 12 17:25:35.393940 containerd[1523]: time="2025-12-12T17:25:35.393889681Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:35.397865 containerd[1523]: time="2025-12-12T17:25:35.397600927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:35.398982 containerd[1523]: time="2025-12-12T17:25:35.398933108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.825782ms" Dec 12 17:25:35.399173 containerd[1523]: time="2025-12-12T17:25:35.399122545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:25:35.399881 containerd[1523]: time="2025-12-12T17:25:35.399720462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:25:36.016464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269021819.mount: Deactivated successfully. Dec 12 17:25:37.819884 containerd[1523]: time="2025-12-12T17:25:37.818483577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:37.820437 containerd[1523]: time="2025-12-12T17:25:37.820035854Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Dec 12 17:25:37.820998 containerd[1523]: time="2025-12-12T17:25:37.820949737Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:37.825623 containerd[1523]: time="2025-12-12T17:25:37.825561600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:37.826973 containerd[1523]: time="2025-12-12T17:25:37.826924243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.42699026s" Dec 12 17:25:37.827144 containerd[1523]: time="2025-12-12T17:25:37.827126679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:25:39.684533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 12 17:25:39.689080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:39.882056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:39.892528 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:39.946185 kubelet[2278]: E1212 17:25:39.946042 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:39.949350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:39.949645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:39.951946 systemd[1]: kubelet.service: Consumed 183ms CPU time, 106.6M memory peak. Dec 12 17:25:44.066791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:44.067686 systemd[1]: kubelet.service: Consumed 183ms CPU time, 106.6M memory peak. Dec 12 17:25:44.073172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:44.112978 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)... Dec 12 17:25:44.112997 systemd[1]: Reloading... Dec 12 17:25:44.264854 zram_generator::config[2336]: No configuration found. 
Dec 12 17:25:44.462676 systemd[1]: Reloading finished in 349 ms. Dec 12 17:25:44.525212 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:25:44.525300 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:25:44.525549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:44.525603 systemd[1]: kubelet.service: Consumed 122ms CPU time, 95M memory peak. Dec 12 17:25:44.528186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:44.695060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:44.709397 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:25:44.758914 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:25:44.759629 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:25:44.759743 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:25:44.759997 kubelet[2384]: I1212 17:25:44.759952 2384 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:25:45.893506 kubelet[2384]: I1212 17:25:45.893441 2384 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:25:45.893506 kubelet[2384]: I1212 17:25:45.893480 2384 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:25:45.894103 kubelet[2384]: I1212 17:25:45.893801 2384 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:25:45.936936 kubelet[2384]: E1212 17:25:45.936893 2384 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.219.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.219.209:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:25:45.939987 kubelet[2384]: I1212 17:25:45.939927 2384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:25:45.948350 kubelet[2384]: I1212 17:25:45.948258 2384 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:25:45.951759 kubelet[2384]: I1212 17:25:45.951711 2384 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:25:45.952101 kubelet[2384]: I1212 17:25:45.952045 2384 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:25:45.952320 kubelet[2384]: I1212 17:25:45.952102 2384 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-4-c728b0285d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:25:45.952540 kubelet[2384]: I1212 17:25:45.952431 2384 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 12 17:25:45.952540 kubelet[2384]: I1212 17:25:45.952440 2384 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:25:45.952720 kubelet[2384]: I1212 17:25:45.952695 2384 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:45.956506 kubelet[2384]: I1212 17:25:45.956460 2384 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:25:45.956624 kubelet[2384]: I1212 17:25:45.956605 2384 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:25:45.956664 kubelet[2384]: I1212 17:25:45.956643 2384 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:25:45.956664 kubelet[2384]: I1212 17:25:45.956660 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:25:45.959102 kubelet[2384]: W1212 17:25:45.959015 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.219.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-4-c728b0285d&limit=500&resourceVersion=0": dial tcp 91.99.219.209:6443: connect: connection refused Dec 12 17:25:45.959102 kubelet[2384]: E1212 17:25:45.959094 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.219.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-4-c728b0285d&limit=500&resourceVersion=0\": dial tcp 91.99.219.209:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:25:45.962869 kubelet[2384]: W1212 17:25:45.962317 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.219.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.219.209:6443: connect: connection refused Dec 12 17:25:45.962869 kubelet[2384]: E1212 17:25:45.962383 2384 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.219.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.219.209:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:25:45.964793 kubelet[2384]: I1212 17:25:45.963403 2384 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:25:45.964793 kubelet[2384]: I1212 17:25:45.964132 2384 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:25:45.964793 kubelet[2384]: W1212 17:25:45.964274 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:25:45.966183 kubelet[2384]: I1212 17:25:45.966158 2384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:25:45.966367 kubelet[2384]: I1212 17:25:45.966355 2384 server.go:1287] "Started kubelet" Dec 12 17:25:45.975920 kubelet[2384]: I1212 17:25:45.975880 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:25:45.977817 kubelet[2384]: I1212 17:25:45.977763 2384 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:25:45.980112 kubelet[2384]: I1212 17:25:45.980054 2384 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:25:45.983671 kubelet[2384]: I1212 17:25:45.983573 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:25:45.983972 kubelet[2384]: I1212 17:25:45.983951 2384 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:25:45.984671 kubelet[2384]: I1212 17:25:45.984642 2384 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:25:45.985291 kubelet[2384]: E1212 17:25:45.985263 2384 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-4-c728b0285d\" not found" Dec 12 17:25:45.987297 kubelet[2384]: I1212 17:25:45.987268 2384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:25:45.987528 kubelet[2384]: E1212 17:25:45.984227 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.219.209:6443/api/v1/namespaces/default/events\": dial tcp 91.99.219.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-4-c728b0285d.188087c7c2d67f08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-4-c728b0285d,UID:ci-4459-2-2-4-c728b0285d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-4-c728b0285d,},FirstTimestamp:2025-12-12 17:25:45.966313224 +0000 UTC m=+1.252204198,LastTimestamp:2025-12-12 17:25:45.966313224 +0000 UTC m=+1.252204198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-4-c728b0285d,}" Dec 12 17:25:45.987796 kubelet[2384]: I1212 17:25:45.987781 2384 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:25:45.988558 kubelet[2384]: I1212 17:25:45.988508 2384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:25:45.989634 kubelet[2384]: I1212 17:25:45.989612 2384 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:25:45.989927 kubelet[2384]: I1212 17:25:45.989900 2384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:25:45.990433 kubelet[2384]: 
W1212 17:25:45.990393 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.219.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.219.209:6443: connect: connection refused Dec 12 17:25:45.990727 kubelet[2384]: E1212 17:25:45.990691 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.219.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.219.209:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:25:45.991098 kubelet[2384]: E1212 17:25:45.991050 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-4-c728b0285d?timeout=10s\": dial tcp 91.99.219.209:6443: connect: connection refused" interval="200ms" Dec 12 17:25:45.993565 kubelet[2384]: I1212 17:25:45.993518 2384 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:25:46.000178 kubelet[2384]: E1212 17:25:46.000131 2384 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:25:46.002860 kubelet[2384]: I1212 17:25:46.001777 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:25:46.003207 kubelet[2384]: I1212 17:25:46.003169 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:25:46.003271 kubelet[2384]: I1212 17:25:46.003205 2384 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:25:46.003271 kubelet[2384]: I1212 17:25:46.003239 2384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:25:46.003271 kubelet[2384]: I1212 17:25:46.003246 2384 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:25:46.003344 kubelet[2384]: E1212 17:25:46.003292 2384 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:25:46.013344 kubelet[2384]: W1212 17:25:46.012015 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.219.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.219.209:6443: connect: connection refused Dec 12 17:25:46.013344 kubelet[2384]: E1212 17:25:46.012084 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.219.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.219.209:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:25:46.024675 kubelet[2384]: I1212 17:25:46.024645 2384 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:25:46.024675 kubelet[2384]: I1212 17:25:46.024669 2384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:25:46.024675 kubelet[2384]: I1212 17:25:46.024691 2384 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:46.026682 kubelet[2384]: I1212 17:25:46.026635 2384 policy_none.go:49] "None policy: Start" Dec 12 17:25:46.026682 kubelet[2384]: I1212 17:25:46.026672 2384 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:25:46.026682 kubelet[2384]: I1212 17:25:46.026687 2384 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:25:46.034487 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 12 17:25:46.050172 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:25:46.055442 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:25:46.068361 kubelet[2384]: I1212 17:25:46.067603 2384 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:25:46.068514 kubelet[2384]: I1212 17:25:46.068396 2384 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:25:46.068514 kubelet[2384]: I1212 17:25:46.068419 2384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:25:46.068861 kubelet[2384]: I1212 17:25:46.068805 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:25:46.073301 kubelet[2384]: E1212 17:25:46.073273 2384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:25:46.074270 kubelet[2384]: E1212 17:25:46.073683 2384 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-4-c728b0285d\" not found" Dec 12 17:25:46.119715 systemd[1]: Created slice kubepods-burstable-podbb5e9e80b9a6994b084d505e0e43dac0.slice - libcontainer container kubepods-burstable-podbb5e9e80b9a6994b084d505e0e43dac0.slice. Dec 12 17:25:46.143401 kubelet[2384]: E1212 17:25:46.143352 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.149563 systemd[1]: Created slice kubepods-burstable-pod205336715f443f6f23956e1d9a1009a3.slice - libcontainer container kubepods-burstable-pod205336715f443f6f23956e1d9a1009a3.slice. 
Dec 12 17:25:46.154219 kubelet[2384]: E1212 17:25:46.154151 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.157132 systemd[1]: Created slice kubepods-burstable-pod66ea16bb9d13feaafd0e4d9120f0a57c.slice - libcontainer container kubepods-burstable-pod66ea16bb9d13feaafd0e4d9120f0a57c.slice. Dec 12 17:25:46.159942 kubelet[2384]: E1212 17:25:46.159890 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.172042 kubelet[2384]: I1212 17:25:46.171964 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.172821 kubelet[2384]: E1212 17:25:46.172774 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.209:6443/api/v1/nodes\": dial tcp 91.99.219.209:6443: connect: connection refused" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.188496 kubelet[2384]: I1212 17:25:46.188332 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.188496 kubelet[2384]: I1212 17:25:46.188397 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.188496 kubelet[2384]: I1212 
17:25:46.188426 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.192298 kubelet[2384]: E1212 17:25:46.192229 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-4-c728b0285d?timeout=10s\": dial tcp 91.99.219.209:6443: connect: connection refused" interval="400ms" Dec 12 17:25:46.289501 kubelet[2384]: I1212 17:25:46.289429 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66ea16bb9d13feaafd0e4d9120f0a57c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-4-c728b0285d\" (UID: \"66ea16bb9d13feaafd0e4d9120f0a57c\") " pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.289812 kubelet[2384]: I1212 17:25:46.289524 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.289812 kubelet[2384]: I1212 17:25:46.289566 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.289812 kubelet[2384]: I1212 17:25:46.289601 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.289812 kubelet[2384]: I1212 17:25:46.289634 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.289812 kubelet[2384]: I1212 17:25:46.289666 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.377848 kubelet[2384]: I1212 17:25:46.377662 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.378202 kubelet[2384]: E1212 17:25:46.378175 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.209:6443/api/v1/nodes\": dial tcp 91.99.219.209:6443: connect: connection refused" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.445211 containerd[1523]: time="2025-12-12T17:25:46.445124042Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-4-c728b0285d,Uid:bb5e9e80b9a6994b084d505e0e43dac0,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:46.455859 containerd[1523]: time="2025-12-12T17:25:46.455682984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-4-c728b0285d,Uid:205336715f443f6f23956e1d9a1009a3,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:46.463371 containerd[1523]: time="2025-12-12T17:25:46.463329447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-4-c728b0285d,Uid:66ea16bb9d13feaafd0e4d9120f0a57c,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:46.475118 containerd[1523]: time="2025-12-12T17:25:46.475058174Z" level=info msg="connecting to shim d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340" address="unix:///run/containerd/s/a7c0e563dfe59dc6febfb6c6e88f520b613c81f110ec8ee8f741b12935ad8914" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:46.502560 containerd[1523]: time="2025-12-12T17:25:46.502467674Z" level=info msg="connecting to shim f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c" address="unix:///run/containerd/s/2800b4fb1448b0b1939f35726636c1ea4343422b819aa099796531cb6d91dfcc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:46.526909 containerd[1523]: time="2025-12-12T17:25:46.526567647Z" level=info msg="connecting to shim dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f" address="unix:///run/containerd/s/b1941e3a470eccc4cb21c70e8a26689aa4a6a32904519debad96e37be2fb139f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:46.538104 systemd[1]: Started cri-containerd-d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340.scope - libcontainer container d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340. 
Dec 12 17:25:46.544698 systemd[1]: Started cri-containerd-f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c.scope - libcontainer container f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c. Dec 12 17:25:46.581038 systemd[1]: Started cri-containerd-dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f.scope - libcontainer container dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f. Dec 12 17:25:46.596672 kubelet[2384]: E1212 17:25:46.596460 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.219.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-4-c728b0285d?timeout=10s\": dial tcp 91.99.219.209:6443: connect: connection refused" interval="800ms" Dec 12 17:25:46.616174 containerd[1523]: time="2025-12-12T17:25:46.616125093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-4-c728b0285d,Uid:bb5e9e80b9a6994b084d505e0e43dac0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340\"" Dec 12 17:25:46.625172 containerd[1523]: time="2025-12-12T17:25:46.624969744Z" level=info msg="CreateContainer within sandbox \"d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:25:46.632481 containerd[1523]: time="2025-12-12T17:25:46.632436344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-4-c728b0285d,Uid:66ea16bb9d13feaafd0e4d9120f0a57c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c\"" Dec 12 17:25:46.635766 containerd[1523]: time="2025-12-12T17:25:46.635717909Z" level=info msg="CreateContainer within sandbox \"f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 
17:25:46.640772 containerd[1523]: time="2025-12-12T17:25:46.640711605Z" level=info msg="Container b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:46.653526 containerd[1523]: time="2025-12-12T17:25:46.653440895Z" level=info msg="Container df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:46.655019 containerd[1523]: time="2025-12-12T17:25:46.654848429Z" level=info msg="CreateContainer within sandbox \"d6aef24a3ed9c826ca071d38996ba59f208361c15357777c3e5c77de467c8340\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af\"" Dec 12 17:25:46.656679 containerd[1523]: time="2025-12-12T17:25:46.656616487Z" level=info msg="StartContainer for \"b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af\"" Dec 12 17:25:46.661853 containerd[1523]: time="2025-12-12T17:25:46.661716036Z" level=info msg="connecting to shim b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af" address="unix:///run/containerd/s/a7c0e563dfe59dc6febfb6c6e88f520b613c81f110ec8ee8f741b12935ad8914" protocol=ttrpc version=3 Dec 12 17:25:46.668749 containerd[1523]: time="2025-12-12T17:25:46.668600445Z" level=info msg="CreateContainer within sandbox \"f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec\"" Dec 12 17:25:46.669616 containerd[1523]: time="2025-12-12T17:25:46.669568644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-4-c728b0285d,Uid:205336715f443f6f23956e1d9a1009a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f\"" Dec 12 17:25:46.671543 containerd[1523]: 
time="2025-12-12T17:25:46.671150439Z" level=info msg="StartContainer for \"df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec\"" Dec 12 17:25:46.672524 containerd[1523]: time="2025-12-12T17:25:46.672490725Z" level=info msg="CreateContainer within sandbox \"dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:25:46.673210 containerd[1523]: time="2025-12-12T17:25:46.673168208Z" level=info msg="connecting to shim df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec" address="unix:///run/containerd/s/2800b4fb1448b0b1939f35726636c1ea4343422b819aa099796531cb6d91dfcc" protocol=ttrpc version=3 Dec 12 17:25:46.687392 containerd[1523]: time="2025-12-12T17:25:46.687345797Z" level=info msg="Container e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:46.701073 systemd[1]: Started cri-containerd-b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af.scope - libcontainer container b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af. 
Dec 12 17:25:46.706555 containerd[1523]: time="2025-12-12T17:25:46.706490958Z" level=info msg="CreateContainer within sandbox \"dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c\"" Dec 12 17:25:46.707281 containerd[1523]: time="2025-12-12T17:25:46.707246651Z" level=info msg="StartContainer for \"e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c\"" Dec 12 17:25:46.709320 containerd[1523]: time="2025-12-12T17:25:46.709263980Z" level=info msg="connecting to shim e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c" address="unix:///run/containerd/s/b1941e3a470eccc4cb21c70e8a26689aa4a6a32904519debad96e37be2fb139f" protocol=ttrpc version=3 Dec 12 17:25:46.710169 systemd[1]: Started cri-containerd-df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec.scope - libcontainer container df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec. Dec 12 17:25:46.743289 systemd[1]: Started cri-containerd-e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c.scope - libcontainer container e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c. 
Dec 12 17:25:46.775956 containerd[1523]: time="2025-12-12T17:25:46.775903159Z" level=info msg="StartContainer for \"b4c0fa637d668dfcbd9171c4b719c192c9d5594893b157cefdaec6b11fb833af\" returns successfully" Dec 12 17:25:46.781982 kubelet[2384]: I1212 17:25:46.781938 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.784184 kubelet[2384]: E1212 17:25:46.784134 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.219.209:6443/api/v1/nodes\": dial tcp 91.99.219.209:6443: connect: connection refused" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:46.794968 containerd[1523]: time="2025-12-12T17:25:46.794848496Z" level=info msg="StartContainer for \"df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec\" returns successfully" Dec 12 17:25:46.868161 containerd[1523]: time="2025-12-12T17:25:46.868057286Z" level=info msg="StartContainer for \"e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c\" returns successfully" Dec 12 17:25:47.028038 kubelet[2384]: E1212 17:25:47.027922 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:47.032846 kubelet[2384]: E1212 17:25:47.032777 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:47.034860 kubelet[2384]: E1212 17:25:47.034700 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:47.586965 kubelet[2384]: I1212 17:25:47.586932 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:48.036433 kubelet[2384]: E1212 17:25:48.036396 
2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:48.038868 kubelet[2384]: E1212 17:25:48.037990 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.038752 kubelet[2384]: E1212 17:25:49.038718 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.039143 kubelet[2384]: E1212 17:25:49.039085 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.254037 kubelet[2384]: E1212 17:25:49.253811 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-4-c728b0285d\" not found" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.407376 kubelet[2384]: I1212 17:25:49.406646 2384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.407376 kubelet[2384]: E1212 17:25:49.406718 2384 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-4-c728b0285d\": node \"ci-4459-2-2-4-c728b0285d\" not found" Dec 12 17:25:49.447602 kubelet[2384]: E1212 17:25:49.447450 2384 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459-2-2-4-c728b0285d.188087c7c2d67f08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-4-c728b0285d,UID:ci-4459-2-2-4-c728b0285d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-4-c728b0285d,},FirstTimestamp:2025-12-12 17:25:45.966313224 +0000 UTC m=+1.252204198,LastTimestamp:2025-12-12 17:25:45.966313224 +0000 UTC m=+1.252204198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-4-c728b0285d,}" Dec 12 17:25:49.488153 kubelet[2384]: I1212 17:25:49.488105 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.502567 kubelet[2384]: E1212 17:25:49.502510 2384 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.502567 kubelet[2384]: I1212 17:25:49.502561 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.506255 kubelet[2384]: E1212 17:25:49.506199 2384 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.506255 kubelet[2384]: I1212 17:25:49.506243 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.510123 kubelet[2384]: E1212 17:25:49.510061 2384 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-4-c728b0285d\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:49.961858 kubelet[2384]: I1212 17:25:49.961716 2384 apiserver.go:52] "Watching apiserver" Dec 12 17:25:49.988450 kubelet[2384]: I1212 17:25:49.988334 2384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:25:51.270362 kubelet[2384]: I1212 17:25:51.270297 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:51.665984 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-7.scope)... Dec 12 17:25:51.666532 systemd[1]: Reloading... Dec 12 17:25:51.781871 zram_generator::config[2696]: No configuration found. Dec 12 17:25:52.020048 systemd[1]: Reloading finished in 352 ms. Dec 12 17:25:52.046646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:52.066578 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:25:52.067279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:52.067939 systemd[1]: kubelet.service: Consumed 1.789s CPU time, 130.7M memory peak. Dec 12 17:25:52.074646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:52.276403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:52.293314 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:25:52.368064 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:25:52.368064 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 12 17:25:52.368064 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:25:52.368064 kubelet[2742]: I1212 17:25:52.367864 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:25:52.387154 kubelet[2742]: I1212 17:25:52.387073 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:25:52.387154 kubelet[2742]: I1212 17:25:52.387136 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:25:52.387654 kubelet[2742]: I1212 17:25:52.387604 2742 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:25:52.389737 kubelet[2742]: I1212 17:25:52.389569 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:25:52.393861 kubelet[2742]: I1212 17:25:52.393047 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:25:52.402771 kubelet[2742]: I1212 17:25:52.402739 2742 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:25:52.411755 kubelet[2742]: I1212 17:25:52.409819 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:25:52.413200 kubelet[2742]: I1212 17:25:52.413017 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:25:52.413917 kubelet[2742]: I1212 17:25:52.413092 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-4-c728b0285d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:25:52.413917 kubelet[2742]: I1212 17:25:52.413481 2742 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 12 17:25:52.413917 kubelet[2742]: I1212 17:25:52.413497 2742 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:25:52.413917 kubelet[2742]: I1212 17:25:52.413565 2742 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:52.413917 kubelet[2742]: I1212 17:25:52.413761 2742 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:25:52.414298 kubelet[2742]: I1212 17:25:52.413780 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:25:52.414298 kubelet[2742]: I1212 17:25:52.413850 2742 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:25:52.414298 kubelet[2742]: I1212 17:25:52.413868 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:25:52.418065 kubelet[2742]: I1212 17:25:52.418014 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:25:52.419047 kubelet[2742]: I1212 17:25:52.419021 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:25:52.421204 kubelet[2742]: I1212 17:25:52.421176 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:25:52.421396 kubelet[2742]: I1212 17:25:52.421385 2742 server.go:1287] "Started kubelet" Dec 12 17:25:52.425470 kubelet[2742]: I1212 17:25:52.425422 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:25:52.441023 kubelet[2742]: I1212 17:25:52.440948 2742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:25:52.446033 kubelet[2742]: I1212 17:25:52.444288 2742 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:25:52.450913 kubelet[2742]: I1212 17:25:52.444529 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:25:52.475610 kubelet[2742]: I1212 17:25:52.444995 2742 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:25:52.475610 kubelet[2742]: I1212 17:25:52.447257 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:25:52.476463 kubelet[2742]: I1212 17:25:52.447273 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:25:52.476463 kubelet[2742]: E1212 17:25:52.447500 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-4-c728b0285d\" not found" Dec 12 17:25:52.476463 kubelet[2742]: I1212 17:25:52.457552 2742 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:25:52.477180 kubelet[2742]: I1212 17:25:52.477119 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:25:52.478111 kubelet[2742]: I1212 17:25:52.477991 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:25:52.479553 kubelet[2742]: I1212 17:25:52.479117 2742 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:25:52.487537 kubelet[2742]: I1212 17:25:52.487493 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:25:52.490391 kubelet[2742]: I1212 17:25:52.490355 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:25:52.490856 kubelet[2742]: I1212 17:25:52.490513 2742 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:25:52.490856 kubelet[2742]: I1212 17:25:52.490545 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:25:52.490856 kubelet[2742]: I1212 17:25:52.490553 2742 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:25:52.490856 kubelet[2742]: E1212 17:25:52.490607 2742 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:25:52.494961 kubelet[2742]: I1212 17:25:52.494916 2742 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:25:52.506525 kubelet[2742]: E1212 17:25:52.505278 2742 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:25:52.562615 kubelet[2742]: I1212 17:25:52.562411 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:25:52.562615 kubelet[2742]: I1212 17:25:52.562440 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:25:52.562615 kubelet[2742]: I1212 17:25:52.562478 2742 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:25:52.564151 kubelet[2742]: I1212 17:25:52.564106 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:25:52.564151 kubelet[2742]: I1212 17:25:52.564137 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:25:52.564151 kubelet[2742]: I1212 17:25:52.564161 2742 policy_none.go:49] "None policy: Start" Dec 12 17:25:52.564367 kubelet[2742]: I1212 17:25:52.564171 2742 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:25:52.564367 kubelet[2742]: I1212 17:25:52.564183 2742 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:25:52.564367 kubelet[2742]: I1212 17:25:52.564312 2742 state_mem.go:75] "Updated machine memory state" Dec 12 17:25:52.569627 kubelet[2742]: I1212 17:25:52.569567 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:25:52.569823 
kubelet[2742]: I1212 17:25:52.569797 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:25:52.570497 kubelet[2742]: I1212 17:25:52.570316 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:25:52.571581 kubelet[2742]: I1212 17:25:52.571549 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:25:52.573566 kubelet[2742]: E1212 17:25:52.572443 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:25:52.592148 kubelet[2742]: I1212 17:25:52.592044 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.592935 kubelet[2742]: I1212 17:25:52.592553 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.594341 kubelet[2742]: I1212 17:25:52.594047 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.606462 kubelet[2742]: E1212 17:25:52.606247 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.660349 sudo[2774]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 12 17:25:52.660641 sudo[2774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 12 17:25:52.678138 kubelet[2742]: I1212 17:25:52.677265 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.680806 kubelet[2742]: I1212 17:25:52.680738 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.680806 kubelet[2742]: I1212 17:25:52.680862 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681107 kubelet[2742]: I1212 17:25:52.680889 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681377 kubelet[2742]: I1212 17:25:52.681332 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681456 kubelet[2742]: I1212 17:25:52.681405 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681456 
kubelet[2742]: I1212 17:25:52.681424 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb5e9e80b9a6994b084d505e0e43dac0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" (UID: \"bb5e9e80b9a6994b084d505e0e43dac0\") " pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681574 kubelet[2742]: I1212 17:25:52.681481 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681574 kubelet[2742]: I1212 17:25:52.681501 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205336715f443f6f23956e1d9a1009a3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-4-c728b0285d\" (UID: \"205336715f443f6f23956e1d9a1009a3\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.681732 kubelet[2742]: I1212 17:25:52.681638 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66ea16bb9d13feaafd0e4d9120f0a57c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-4-c728b0285d\" (UID: \"66ea16bb9d13feaafd0e4d9120f0a57c\") " pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.697757 kubelet[2742]: I1212 17:25:52.697514 2742 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:52.697757 kubelet[2742]: I1212 17:25:52.697605 2742 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4459-2-2-4-c728b0285d" Dec 12 17:25:53.073556 sudo[2774]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:53.428200 kubelet[2742]: I1212 17:25:53.427897 2742 apiserver.go:52] "Watching apiserver" Dec 12 17:25:53.477923 kubelet[2742]: I1212 17:25:53.476601 2742 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:25:53.545243 kubelet[2742]: I1212 17:25:53.545197 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:53.546025 kubelet[2742]: I1212 17:25:53.545916 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:53.560110 kubelet[2742]: E1212 17:25:53.559585 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-4-c728b0285d\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:53.563918 kubelet[2742]: E1212 17:25:53.563855 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-4-c728b0285d\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" Dec 12 17:25:53.612542 kubelet[2742]: I1212 17:25:53.612433 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-4-c728b0285d" podStartSLOduration=1.61240849 podStartE2EDuration="1.61240849s" podCreationTimestamp="2025-12-12 17:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:53.592344646 +0000 UTC m=+1.290774953" watchObservedRunningTime="2025-12-12 17:25:53.61240849 +0000 UTC m=+1.310838797" Dec 12 17:25:53.634115 kubelet[2742]: I1212 17:25:53.633598 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-4-c728b0285d" 
podStartSLOduration=1.633567882 podStartE2EDuration="1.633567882s" podCreationTimestamp="2025-12-12 17:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:53.615283852 +0000 UTC m=+1.313714159" watchObservedRunningTime="2025-12-12 17:25:53.633567882 +0000 UTC m=+1.331998229" Dec 12 17:25:54.917918 sudo[1803]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:55.075220 sshd[1802]: Connection closed by 139.178.89.65 port 57196 Dec 12 17:25:55.076268 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:55.085398 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:25:55.086375 systemd[1]: sshd@6-91.99.219.209:22-139.178.89.65:57196.service: Deactivated successfully. Dec 12 17:25:55.092137 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:25:55.092680 systemd[1]: session-7.scope: Consumed 8.126s CPU time, 261.5M memory peak. Dec 12 17:25:55.095753 systemd-logind[1494]: Removed session 7. Dec 12 17:25:56.622927 kubelet[2742]: I1212 17:25:56.622899 2742 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:25:56.623778 containerd[1523]: time="2025-12-12T17:25:56.623670081Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 12 17:25:56.624195 kubelet[2742]: I1212 17:25:56.624014 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:25:56.845446 kubelet[2742]: I1212 17:25:56.844894 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-4-c728b0285d" podStartSLOduration=5.844868399 podStartE2EDuration="5.844868399s" podCreationTimestamp="2025-12-12 17:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:53.635573518 +0000 UTC m=+1.334003825" watchObservedRunningTime="2025-12-12 17:25:56.844868399 +0000 UTC m=+4.543298746" Dec 12 17:25:57.610587 systemd[1]: Created slice kubepods-besteffort-podbf871960_7e35_4c3f_ae48_ece783dd2469.slice - libcontainer container kubepods-besteffort-podbf871960_7e35_4c3f_ae48_ece783dd2469.slice. Dec 12 17:25:57.615204 kubelet[2742]: I1212 17:25:57.615146 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g894x\" (UniqueName: \"kubernetes.io/projected/bf871960-7e35-4c3f-ae48-ece783dd2469-kube-api-access-g894x\") pod \"kube-proxy-hcc8q\" (UID: \"bf871960-7e35-4c3f-ae48-ece783dd2469\") " pod="kube-system/kube-proxy-hcc8q" Dec 12 17:25:57.615979 kubelet[2742]: I1212 17:25:57.615926 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf871960-7e35-4c3f-ae48-ece783dd2469-kube-proxy\") pod \"kube-proxy-hcc8q\" (UID: \"bf871960-7e35-4c3f-ae48-ece783dd2469\") " pod="kube-system/kube-proxy-hcc8q" Dec 12 17:25:57.616136 kubelet[2742]: I1212 17:25:57.616014 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf871960-7e35-4c3f-ae48-ece783dd2469-xtables-lock\") pod 
\"kube-proxy-hcc8q\" (UID: \"bf871960-7e35-4c3f-ae48-ece783dd2469\") " pod="kube-system/kube-proxy-hcc8q" Dec 12 17:25:57.616136 kubelet[2742]: I1212 17:25:57.616037 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf871960-7e35-4c3f-ae48-ece783dd2469-lib-modules\") pod \"kube-proxy-hcc8q\" (UID: \"bf871960-7e35-4c3f-ae48-ece783dd2469\") " pod="kube-system/kube-proxy-hcc8q" Dec 12 17:25:57.626443 systemd[1]: Created slice kubepods-burstable-pod809d2154_cff2_4bf3_ba53_5676f5eddbb6.slice - libcontainer container kubepods-burstable-pod809d2154_cff2_4bf3_ba53_5676f5eddbb6.slice. Dec 12 17:25:57.692338 systemd[1]: Created slice kubepods-besteffort-pod6525ada2_da67_4802_9c8b_74bc16d633e3.slice - libcontainer container kubepods-besteffort-pod6525ada2_da67_4802_9c8b_74bc16d633e3.slice. Dec 12 17:25:57.699651 kubelet[2742]: I1212 17:25:57.699588 2742 status_manager.go:890] "Failed to get status for pod" podUID="6525ada2-da67-4802-9c8b-74bc16d633e3" pod="kube-system/cilium-operator-6c4d7847fc-95xns" err="pods \"cilium-operator-6c4d7847fc-95xns\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" Dec 12 17:25:57.716658 kubelet[2742]: I1212 17:25:57.716567 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cni-path\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.716658 kubelet[2742]: I1212 17:25:57.716612 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-net\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.716939 kubelet[2742]: I1212 17:25:57.716910 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddkb\" (UniqueName: \"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-kube-api-access-6ddkb\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717026 kubelet[2742]: I1212 17:25:57.716956 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4mc2\" (UniqueName: \"kubernetes.io/projected/6525ada2-da67-4802-9c8b-74bc16d633e3-kube-api-access-d4mc2\") pod \"cilium-operator-6c4d7847fc-95xns\" (UID: \"6525ada2-da67-4802-9c8b-74bc16d633e3\") " pod="kube-system/cilium-operator-6c4d7847fc-95xns" Dec 12 17:25:57.717026 kubelet[2742]: I1212 17:25:57.717015 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-etc-cni-netd\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717041 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-kernel\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717058 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hubble-tls\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717093 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/809d2154-cff2-4bf3-ba53-5676f5eddbb6-clustermesh-secrets\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717124 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-run\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717144 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-xtables-lock\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717180 kubelet[2742]: I1212 17:25:57.717173 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-config-path\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717305 kubelet[2742]: I1212 17:25:57.717192 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-cgroup\") pod \"cilium-jdqq7\" (UID: 
\"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717305 kubelet[2742]: I1212 17:25:57.717207 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-lib-modules\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717305 kubelet[2742]: I1212 17:25:57.717235 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-bpf-maps\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717305 kubelet[2742]: I1212 17:25:57.717254 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hostproc\") pod \"cilium-jdqq7\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") " pod="kube-system/cilium-jdqq7" Dec 12 17:25:57.717305 kubelet[2742]: I1212 17:25:57.717269 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6525ada2-da67-4802-9c8b-74bc16d633e3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-95xns\" (UID: \"6525ada2-da67-4802-9c8b-74bc16d633e3\") " pod="kube-system/cilium-operator-6c4d7847fc-95xns" Dec 12 17:25:57.924926 containerd[1523]: time="2025-12-12T17:25:57.924196313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcc8q,Uid:bf871960-7e35-4c3f-ae48-ece783dd2469,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:57.933533 containerd[1523]: time="2025-12-12T17:25:57.932647174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-jdqq7,Uid:809d2154-cff2-4bf3-ba53-5676f5eddbb6,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:57.959775 containerd[1523]: time="2025-12-12T17:25:57.959110817Z" level=info msg="connecting to shim c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d" address="unix:///run/containerd/s/ebb2739ec155080a3a13b454b93b25bc4c1e412c9cebd904a00e30084d85394d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:57.977514 containerd[1523]: time="2025-12-12T17:25:57.977417383Z" level=info msg="connecting to shim 6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:57.999420 containerd[1523]: time="2025-12-12T17:25:57.999184813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95xns,Uid:6525ada2-da67-4802-9c8b-74bc16d633e3,Namespace:kube-system,Attempt:0,}" Dec 12 17:25:58.002154 systemd[1]: Started cri-containerd-c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d.scope - libcontainer container c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d. Dec 12 17:25:58.024153 systemd[1]: Started cri-containerd-6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b.scope - libcontainer container 6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b. 
Dec 12 17:25:58.042715 containerd[1523]: time="2025-12-12T17:25:58.041472997Z" level=info msg="connecting to shim 057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa" address="unix:///run/containerd/s/e35b5f06b88aba20762411cb4f04095c8afdcefd4ba7215051330b1128d15fd9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:25:58.068748 containerd[1523]: time="2025-12-12T17:25:58.068699847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcc8q,Uid:bf871960-7e35-4c3f-ae48-ece783dd2469,Namespace:kube-system,Attempt:0,} returns sandbox id \"c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d\"" Dec 12 17:25:58.076610 containerd[1523]: time="2025-12-12T17:25:58.076550759Z" level=info msg="CreateContainer within sandbox \"c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:25:58.083929 containerd[1523]: time="2025-12-12T17:25:58.083884307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdqq7,Uid:809d2154-cff2-4bf3-ba53-5676f5eddbb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\"" Dec 12 17:25:58.091368 systemd[1]: Started cri-containerd-057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa.scope - libcontainer container 057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa. 
Dec 12 17:25:58.092031 containerd[1523]: time="2025-12-12T17:25:58.091987081Z" level=info msg="Container 5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:25:58.095254 containerd[1523]: time="2025-12-12T17:25:58.095174594Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 12 17:25:58.108482 containerd[1523]: time="2025-12-12T17:25:58.108384364Z" level=info msg="CreateContainer within sandbox \"c70ffb607e8d9f4c3cab2c797df25867ff596a9e712b75bc126334374157a37d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6\"" Dec 12 17:25:58.109357 containerd[1523]: time="2025-12-12T17:25:58.109312844Z" level=info msg="StartContainer for \"5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6\"" Dec 12 17:25:58.111586 containerd[1523]: time="2025-12-12T17:25:58.111527393Z" level=info msg="connecting to shim 5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6" address="unix:///run/containerd/s/ebb2739ec155080a3a13b454b93b25bc4c1e412c9cebd904a00e30084d85394d" protocol=ttrpc version=3 Dec 12 17:25:58.137301 systemd[1]: Started cri-containerd-5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6.scope - libcontainer container 5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6. 
Dec 12 17:25:58.159546 containerd[1523]: time="2025-12-12T17:25:58.159375929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-95xns,Uid:6525ada2-da67-4802-9c8b-74bc16d633e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\"" Dec 12 17:25:58.236960 containerd[1523]: time="2025-12-12T17:25:58.236795316Z" level=info msg="StartContainer for \"5970d104b52b857475c7832d952d9e61b5b24f0f7b195f6fa9019101a44ec1a6\" returns successfully" Dec 12 17:25:59.561250 kubelet[2742]: I1212 17:25:59.561106 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcc8q" podStartSLOduration=2.561060281 podStartE2EDuration="2.561060281s" podCreationTimestamp="2025-12-12 17:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:25:58.595025901 +0000 UTC m=+6.293456208" watchObservedRunningTime="2025-12-12 17:25:59.561060281 +0000 UTC m=+7.259490588" Dec 12 17:26:02.011577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321603171.mount: Deactivated successfully. 
Dec 12 17:26:03.552346 containerd[1523]: time="2025-12-12T17:26:03.551286528Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:03.552346 containerd[1523]: time="2025-12-12T17:26:03.552165475Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 12 17:26:03.553440 containerd[1523]: time="2025-12-12T17:26:03.553382529Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:03.555278 containerd[1523]: time="2025-12-12T17:26:03.555222590Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.459961469s" Dec 12 17:26:03.555551 containerd[1523]: time="2025-12-12T17:26:03.555520333Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 12 17:26:03.557300 containerd[1523]: time="2025-12-12T17:26:03.557263666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 12 17:26:03.562850 containerd[1523]: time="2025-12-12T17:26:03.562456625Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:26:03.578425 containerd[1523]: time="2025-12-12T17:26:03.578374245Z" level=info msg="Container 4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:03.581380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459016529.mount: Deactivated successfully. Dec 12 17:26:03.590186 containerd[1523]: time="2025-12-12T17:26:03.590042540Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\"" Dec 12 17:26:03.592126 containerd[1523]: time="2025-12-12T17:26:03.592085977Z" level=info msg="StartContainer for \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\"" Dec 12 17:26:03.593786 containerd[1523]: time="2025-12-12T17:26:03.593737664Z" level=info msg="connecting to shim 4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" protocol=ttrpc version=3 Dec 12 17:26:03.622137 systemd[1]: Started cri-containerd-4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255.scope - libcontainer container 4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255. Dec 12 17:26:03.662634 containerd[1523]: time="2025-12-12T17:26:03.662560422Z" level=info msg="StartContainer for \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\" returns successfully" Dec 12 17:26:03.685025 systemd[1]: cri-containerd-4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255.scope: Deactivated successfully. 
Dec 12 17:26:03.690298 containerd[1523]: time="2025-12-12T17:26:03.690232825Z" level=info msg="received container exit event container_id:\"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\" id:\"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\" pid:3152 exited_at:{seconds:1765560363 nanos:689640659}" Dec 12 17:26:03.715795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255-rootfs.mount: Deactivated successfully. Dec 12 17:26:04.590789 containerd[1523]: time="2025-12-12T17:26:04.590711991Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:26:04.612102 containerd[1523]: time="2025-12-12T17:26:04.610852466Z" level=info msg="Container 02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:04.623064 containerd[1523]: time="2025-12-12T17:26:04.623015981Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\"" Dec 12 17:26:04.625776 containerd[1523]: time="2025-12-12T17:26:04.625690942Z" level=info msg="StartContainer for \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\"" Dec 12 17:26:04.628471 containerd[1523]: time="2025-12-12T17:26:04.628413307Z" level=info msg="connecting to shim 02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" protocol=ttrpc version=3 Dec 12 17:26:04.666134 systemd[1]: Started cri-containerd-02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83.scope - libcontainer 
container 02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83. Dec 12 17:26:04.709519 containerd[1523]: time="2025-12-12T17:26:04.709466166Z" level=info msg="StartContainer for \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\" returns successfully" Dec 12 17:26:04.729252 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:26:04.729868 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:04.730317 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:04.734630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:26:04.738090 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:26:04.741147 systemd[1]: cri-containerd-02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83.scope: Deactivated successfully. Dec 12 17:26:04.744805 containerd[1523]: time="2025-12-12T17:26:04.744759501Z" level=info msg="received container exit event container_id:\"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\" id:\"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\" pid:3198 exited_at:{seconds:1765560364 nanos:744463599}" Dec 12 17:26:04.772473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:26:05.606165 containerd[1523]: time="2025-12-12T17:26:05.605502313Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 17:26:05.612181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83-rootfs.mount: Deactivated successfully. 
Dec 12 17:26:05.647101 containerd[1523]: time="2025-12-12T17:26:05.647049062Z" level=info msg="Container 912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:05.679443 containerd[1523]: time="2025-12-12T17:26:05.678393778Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\"" Dec 12 17:26:05.684673 containerd[1523]: time="2025-12-12T17:26:05.684579595Z" level=info msg="StartContainer for \"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\"" Dec 12 17:26:05.687362 containerd[1523]: time="2025-12-12T17:26:05.687315397Z" level=info msg="connecting to shim 912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" protocol=ttrpc version=3 Dec 12 17:26:05.733287 systemd[1]: Started cri-containerd-912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d.scope - libcontainer container 912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d. 
Dec 12 17:26:05.806867 containerd[1523]: time="2025-12-12T17:26:05.806325468Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:05.809252 containerd[1523]: time="2025-12-12T17:26:05.809178559Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 12 17:26:05.810512 containerd[1523]: time="2025-12-12T17:26:05.810462693Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:05.815861 containerd[1523]: time="2025-12-12T17:26:05.815771646Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.258207396s" Dec 12 17:26:05.816232 containerd[1523]: time="2025-12-12T17:26:05.816062307Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 12 17:26:05.824132 containerd[1523]: time="2025-12-12T17:26:05.824028976Z" level=info msg="CreateContainer within sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 12 17:26:05.832471 containerd[1523]: time="2025-12-12T17:26:05.832349590Z" level=info msg="StartContainer for 
\"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\" returns successfully" Dec 12 17:26:05.837434 systemd[1]: cri-containerd-912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d.scope: Deactivated successfully. Dec 12 17:26:05.843313 containerd[1523]: time="2025-12-12T17:26:05.842825084Z" level=info msg="received container exit event container_id:\"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\" id:\"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\" pid:3256 exited_at:{seconds:1765560365 nanos:842421894}" Dec 12 17:26:05.847618 containerd[1523]: time="2025-12-12T17:26:05.847029155Z" level=info msg="Container bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:05.858284 containerd[1523]: time="2025-12-12T17:26:05.857803591Z" level=info msg="CreateContainer within sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\"" Dec 12 17:26:05.859898 containerd[1523]: time="2025-12-12T17:26:05.859160451Z" level=info msg="StartContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\"" Dec 12 17:26:05.860456 containerd[1523]: time="2025-12-12T17:26:05.860396142Z" level=info msg="connecting to shim bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665" address="unix:///run/containerd/s/e35b5f06b88aba20762411cb4f04095c8afdcefd4ba7215051330b1128d15fd9" protocol=ttrpc version=3 Dec 12 17:26:05.884277 systemd[1]: Started cri-containerd-bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665.scope - libcontainer container bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665. 
Dec 12 17:26:05.938886 containerd[1523]: time="2025-12-12T17:26:05.938553715Z" level=info msg="StartContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" returns successfully" Dec 12 17:26:06.614204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d-rootfs.mount: Deactivated successfully. Dec 12 17:26:06.622133 containerd[1523]: time="2025-12-12T17:26:06.622092450Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 17:26:06.646341 containerd[1523]: time="2025-12-12T17:26:06.646284846Z" level=info msg="Container 43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:06.674007 containerd[1523]: time="2025-12-12T17:26:06.673919092Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\"" Dec 12 17:26:06.676140 containerd[1523]: time="2025-12-12T17:26:06.675358716Z" level=info msg="StartContainer for \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\"" Dec 12 17:26:06.678817 containerd[1523]: time="2025-12-12T17:26:06.678150879Z" level=info msg="connecting to shim 43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" protocol=ttrpc version=3 Dec 12 17:26:06.723082 systemd[1]: Started cri-containerd-43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095.scope - libcontainer container 43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095. 
Dec 12 17:26:06.778946 kubelet[2742]: I1212 17:26:06.778806 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-95xns" podStartSLOduration=2.123009545 podStartE2EDuration="9.778775103s" podCreationTimestamp="2025-12-12 17:25:57 +0000 UTC" firstStartedPulling="2025-12-12 17:25:58.162136005 +0000 UTC m=+5.860566312" lastFinishedPulling="2025-12-12 17:26:05.817901563 +0000 UTC m=+13.516331870" observedRunningTime="2025-12-12 17:26:06.775612713 +0000 UTC m=+14.474043020" watchObservedRunningTime="2025-12-12 17:26:06.778775103 +0000 UTC m=+14.477205610" Dec 12 17:26:06.815977 systemd[1]: cri-containerd-43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095.scope: Deactivated successfully. Dec 12 17:26:06.820472 containerd[1523]: time="2025-12-12T17:26:06.820296757Z" level=info msg="received container exit event container_id:\"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\" id:\"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\" pid:3331 exited_at:{seconds:1765560366 nanos:819141913}" Dec 12 17:26:06.824107 containerd[1523]: time="2025-12-12T17:26:06.824067190Z" level=info msg="StartContainer for \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\" returns successfully" Dec 12 17:26:07.612996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095-rootfs.mount: Deactivated successfully. 
Dec 12 17:26:07.626597 containerd[1523]: time="2025-12-12T17:26:07.626290314Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 17:26:07.655340 containerd[1523]: time="2025-12-12T17:26:07.654424002Z" level=info msg="Container 06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:07.665389 containerd[1523]: time="2025-12-12T17:26:07.665339101Z" level=info msg="CreateContainer within sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\"" Dec 12 17:26:07.666418 containerd[1523]: time="2025-12-12T17:26:07.666372455Z" level=info msg="StartContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\"" Dec 12 17:26:07.672483 containerd[1523]: time="2025-12-12T17:26:07.672398285Z" level=info msg="connecting to shim 06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c" address="unix:///run/containerd/s/27592abc5733b61cde1908a08253b01b7925b8800f2f1e72d5b0beb3df6946bd" protocol=ttrpc version=3 Dec 12 17:26:07.712084 systemd[1]: Started cri-containerd-06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c.scope - libcontainer container 06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c. 
Dec 12 17:26:07.758234 containerd[1523]: time="2025-12-12T17:26:07.758175688Z" level=info msg="StartContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" returns successfully" Dec 12 17:26:07.903062 kubelet[2742]: I1212 17:26:07.901658 2742 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:26:07.955541 systemd[1]: Created slice kubepods-burstable-pod06fa718c_55cb_4e4e_b5df_4ba1848de957.slice - libcontainer container kubepods-burstable-pod06fa718c_55cb_4e4e_b5df_4ba1848de957.slice. Dec 12 17:26:07.967774 systemd[1]: Created slice kubepods-burstable-pod6472f4fc_a85b_439e_a967_e984795bc3cd.slice - libcontainer container kubepods-burstable-pod6472f4fc_a85b_439e_a967_e984795bc3cd.slice. Dec 12 17:26:07.997991 kubelet[2742]: I1212 17:26:07.997754 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99c9n\" (UniqueName: \"kubernetes.io/projected/06fa718c-55cb-4e4e-b5df-4ba1848de957-kube-api-access-99c9n\") pod \"coredns-668d6bf9bc-zl8l8\" (UID: \"06fa718c-55cb-4e4e-b5df-4ba1848de957\") " pod="kube-system/coredns-668d6bf9bc-zl8l8" Dec 12 17:26:07.998378 kubelet[2742]: I1212 17:26:07.998235 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7tdh\" (UniqueName: \"kubernetes.io/projected/6472f4fc-a85b-439e-a967-e984795bc3cd-kube-api-access-d7tdh\") pod \"coredns-668d6bf9bc-ldrv4\" (UID: \"6472f4fc-a85b-439e-a967-e984795bc3cd\") " pod="kube-system/coredns-668d6bf9bc-ldrv4" Dec 12 17:26:07.998378 kubelet[2742]: I1212 17:26:07.998304 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fa718c-55cb-4e4e-b5df-4ba1848de957-config-volume\") pod \"coredns-668d6bf9bc-zl8l8\" (UID: \"06fa718c-55cb-4e4e-b5df-4ba1848de957\") " pod="kube-system/coredns-668d6bf9bc-zl8l8" Dec 12 
17:26:07.998378 kubelet[2742]: I1212 17:26:07.998334 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6472f4fc-a85b-439e-a967-e984795bc3cd-config-volume\") pod \"coredns-668d6bf9bc-ldrv4\" (UID: \"6472f4fc-a85b-439e-a967-e984795bc3cd\") " pod="kube-system/coredns-668d6bf9bc-ldrv4" Dec 12 17:26:08.263966 containerd[1523]: time="2025-12-12T17:26:08.263761721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl8l8,Uid:06fa718c-55cb-4e4e-b5df-4ba1848de957,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:08.274894 containerd[1523]: time="2025-12-12T17:26:08.274609283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ldrv4,Uid:6472f4fc-a85b-439e-a967-e984795bc3cd,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:09.707517 systemd-networkd[1412]: cilium_host: Link UP Dec 12 17:26:09.711018 systemd-networkd[1412]: cilium_net: Link UP Dec 12 17:26:09.711696 systemd-networkd[1412]: cilium_net: Gained carrier Dec 12 17:26:09.713280 systemd-networkd[1412]: cilium_host: Gained carrier Dec 12 17:26:09.846643 systemd-networkd[1412]: cilium_vxlan: Link UP Dec 12 17:26:09.846651 systemd-networkd[1412]: cilium_vxlan: Gained carrier Dec 12 17:26:10.154974 systemd-networkd[1412]: cilium_net: Gained IPv6LL Dec 12 17:26:10.169899 kernel: NET: Registered PF_ALG protocol family Dec 12 17:26:10.171447 systemd-networkd[1412]: cilium_host: Gained IPv6LL Dec 12 17:26:10.956778 systemd-networkd[1412]: lxc_health: Link UP Dec 12 17:26:10.965212 systemd-networkd[1412]: lxc_health: Gained carrier Dec 12 17:26:11.359725 kernel: eth0: renamed from tmp5c45a Dec 12 17:26:11.361034 systemd-networkd[1412]: lxcd7b07de9d751: Link UP Dec 12 17:26:11.365992 kernel: eth0: renamed from tmp51c7c Dec 12 17:26:11.376164 systemd-networkd[1412]: lxc17d21af1e598: Link UP Dec 12 17:26:11.378284 systemd-networkd[1412]: lxcd7b07de9d751: Gained carrier Dec 
12 17:26:11.378546 systemd-networkd[1412]: lxc17d21af1e598: Gained carrier Dec 12 17:26:11.731688 systemd-networkd[1412]: cilium_vxlan: Gained IPv6LL Dec 12 17:26:11.968470 kubelet[2742]: I1212 17:26:11.967350 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdqq7" podStartSLOduration=9.503932975 podStartE2EDuration="14.967329283s" podCreationTimestamp="2025-12-12 17:25:57 +0000 UTC" firstStartedPulling="2025-12-12 17:25:58.093584097 +0000 UTC m=+5.792014404" lastFinishedPulling="2025-12-12 17:26:03.556980405 +0000 UTC m=+11.255410712" observedRunningTime="2025-12-12 17:26:08.657066392 +0000 UTC m=+16.355496779" watchObservedRunningTime="2025-12-12 17:26:11.967329283 +0000 UTC m=+19.665759590" Dec 12 17:26:12.435997 systemd-networkd[1412]: lxc17d21af1e598: Gained IPv6LL Dec 12 17:26:12.692581 systemd-networkd[1412]: lxc_health: Gained IPv6LL Dec 12 17:26:13.075379 systemd-networkd[1412]: lxcd7b07de9d751: Gained IPv6LL Dec 12 17:26:15.795476 containerd[1523]: time="2025-12-12T17:26:15.795416515Z" level=info msg="connecting to shim 51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc" address="unix:///run/containerd/s/944aec8d4cc81fdeec777690e7f2b50ccbddab9e6ed8339fab70f49a26aac6f8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:15.799518 containerd[1523]: time="2025-12-12T17:26:15.799461135Z" level=info msg="connecting to shim 5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b" address="unix:///run/containerd/s/ee3fdfb7296a7cf27ce5e8ac9650fc9f508a07a1f316a7ed52f7b299c68c9edd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:15.856263 systemd[1]: Started cri-containerd-51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc.scope - libcontainer container 51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc. 
Dec 12 17:26:15.860046 systemd[1]: Started cri-containerd-5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b.scope - libcontainer container 5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b. Dec 12 17:26:15.958605 containerd[1523]: time="2025-12-12T17:26:15.958552532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ldrv4,Uid:6472f4fc-a85b-439e-a967-e984795bc3cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc\"" Dec 12 17:26:15.964049 containerd[1523]: time="2025-12-12T17:26:15.963901075Z" level=info msg="CreateContainer within sandbox \"51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:26:15.964282 containerd[1523]: time="2025-12-12T17:26:15.964256698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl8l8,Uid:06fa718c-55cb-4e4e-b5df-4ba1848de957,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b\"" Dec 12 17:26:15.979883 containerd[1523]: time="2025-12-12T17:26:15.979008164Z" level=info msg="Container 22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:15.991420 containerd[1523]: time="2025-12-12T17:26:15.991371876Z" level=info msg="CreateContainer within sandbox \"5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:26:15.996987 containerd[1523]: time="2025-12-12T17:26:15.996919352Z" level=info msg="CreateContainer within sandbox \"51c7c9cdddcb623ef7ba4c05d4f4b210fce810208c9b4adc6a19ab93acfd95bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770\"" Dec 12 17:26:15.999306 containerd[1523]: 
time="2025-12-12T17:26:15.997653919Z" level=info msg="StartContainer for \"22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770\"" Dec 12 17:26:15.999306 containerd[1523]: time="2025-12-12T17:26:15.998676904Z" level=info msg="connecting to shim 22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770" address="unix:///run/containerd/s/944aec8d4cc81fdeec777690e7f2b50ccbddab9e6ed8339fab70f49a26aac6f8" protocol=ttrpc version=3 Dec 12 17:26:16.004733 containerd[1523]: time="2025-12-12T17:26:16.004665326Z" level=info msg="Container 5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:16.013356 containerd[1523]: time="2025-12-12T17:26:16.013243830Z" level=info msg="CreateContainer within sandbox \"5c45af24ad0b35f5a893489726aba0578b4353d2661619ef7a011b5384aa5e4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e\"" Dec 12 17:26:16.014810 containerd[1523]: time="2025-12-12T17:26:16.014551433Z" level=info msg="StartContainer for \"5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e\"" Dec 12 17:26:16.018762 containerd[1523]: time="2025-12-12T17:26:16.018703256Z" level=info msg="connecting to shim 5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e" address="unix:///run/containerd/s/ee3fdfb7296a7cf27ce5e8ac9650fc9f508a07a1f316a7ed52f7b299c68c9edd" protocol=ttrpc version=3 Dec 12 17:26:16.028076 systemd[1]: Started cri-containerd-22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770.scope - libcontainer container 22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770. Dec 12 17:26:16.046153 systemd[1]: Started cri-containerd-5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e.scope - libcontainer container 5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e. 
Dec 12 17:26:16.098426 containerd[1523]: time="2025-12-12T17:26:16.098355268Z" level=info msg="StartContainer for \"22b80204479b361dc0f737bcb7e793d3838b6de7aa9896b9c0bbc2f9e1c45770\" returns successfully" Dec 12 17:26:16.111272 containerd[1523]: time="2025-12-12T17:26:16.111106797Z" level=info msg="StartContainer for \"5bd186b9494efd0c99e7cb53a0740057176b556ec57d2482bb7681b87c85e28e\" returns successfully" Dec 12 17:26:16.689016 kubelet[2742]: I1212 17:26:16.688696 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zl8l8" podStartSLOduration=19.68867439 podStartE2EDuration="19.68867439s" podCreationTimestamp="2025-12-12 17:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:16.686414407 +0000 UTC m=+24.384844714" watchObservedRunningTime="2025-12-12 17:26:16.68867439 +0000 UTC m=+24.387104697" Dec 12 17:26:16.706525 kubelet[2742]: I1212 17:26:16.706442 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ldrv4" podStartSLOduration=19.706415996 podStartE2EDuration="19.706415996s" podCreationTimestamp="2025-12-12 17:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:16.705023987 +0000 UTC m=+24.403454294" watchObservedRunningTime="2025-12-12 17:26:16.706415996 +0000 UTC m=+24.404846303" Dec 12 17:26:16.778441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785850661.mount: Deactivated successfully. Dec 12 17:26:26.711826 kubelet[2742]: I1212 17:26:26.711736 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:28:16.401569 systemd[1]: Started sshd@7-91.99.219.209:22-139.178.89.65:49638.service - OpenSSH per-connection server daemon (139.178.89.65:49638). 
Dec 12 17:28:17.419280 sshd[4079]: Accepted publickey for core from 139.178.89.65 port 49638 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:17.421761 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:17.428919 systemd-logind[1494]: New session 8 of user core.
Dec 12 17:28:17.437115 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 17:28:18.206957 sshd[4082]: Connection closed by 139.178.89.65 port 49638
Dec 12 17:28:18.207752 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:18.213381 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit.
Dec 12 17:28:18.214107 systemd[1]: sshd@7-91.99.219.209:22-139.178.89.65:49638.service: Deactivated successfully.
Dec 12 17:28:18.218085 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 17:28:18.222728 systemd-logind[1494]: Removed session 8.
Dec 12 17:28:23.381710 systemd[1]: Started sshd@8-91.99.219.209:22-139.178.89.65:34498.service - OpenSSH per-connection server daemon (139.178.89.65:34498).
Dec 12 17:28:24.407290 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 34498 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:24.410431 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:24.415464 systemd-logind[1494]: New session 9 of user core.
Dec 12 17:28:24.428250 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 17:28:25.179013 sshd[4101]: Connection closed by 139.178.89.65 port 34498
Dec 12 17:28:25.180275 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:25.188154 systemd[1]: sshd@8-91.99.219.209:22-139.178.89.65:34498.service: Deactivated successfully.
Dec 12 17:28:25.191323 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 17:28:25.194937 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit.
Dec 12 17:28:25.196510 systemd-logind[1494]: Removed session 9.
Dec 12 17:28:30.349134 systemd[1]: Started sshd@9-91.99.219.209:22-139.178.89.65:34506.service - OpenSSH per-connection server daemon (139.178.89.65:34506).
Dec 12 17:28:31.337857 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 34506 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:31.340297 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:31.347450 systemd-logind[1494]: New session 10 of user core.
Dec 12 17:28:31.350083 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 17:28:32.080416 sshd[4118]: Connection closed by 139.178.89.65 port 34506
Dec 12 17:28:32.081504 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:32.087664 systemd[1]: sshd@9-91.99.219.209:22-139.178.89.65:34506.service: Deactivated successfully.
Dec 12 17:28:32.090976 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 17:28:32.092958 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit.
Dec 12 17:28:32.094802 systemd-logind[1494]: Removed session 10.
Dec 12 17:28:32.259301 systemd[1]: Started sshd@10-91.99.219.209:22-139.178.89.65:47198.service - OpenSSH per-connection server daemon (139.178.89.65:47198).
Dec 12 17:28:33.277290 sshd[4131]: Accepted publickey for core from 139.178.89.65 port 47198 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:33.280105 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:33.286606 systemd-logind[1494]: New session 11 of user core.
Dec 12 17:28:33.294304 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 17:28:34.085049 sshd[4134]: Connection closed by 139.178.89.65 port 47198
Dec 12 17:28:34.086224 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:34.093229 systemd[1]: sshd@10-91.99.219.209:22-139.178.89.65:47198.service: Deactivated successfully.
Dec 12 17:28:34.097152 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 17:28:34.098886 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit.
Dec 12 17:28:34.102306 systemd-logind[1494]: Removed session 11.
Dec 12 17:28:34.269179 systemd[1]: Started sshd@11-91.99.219.209:22-139.178.89.65:47204.service - OpenSSH per-connection server daemon (139.178.89.65:47204).
Dec 12 17:28:35.286806 sshd[4144]: Accepted publickey for core from 139.178.89.65 port 47204 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:35.291465 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:35.302912 systemd-logind[1494]: New session 12 of user core.
Dec 12 17:28:35.319641 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 17:28:36.055271 sshd[4147]: Connection closed by 139.178.89.65 port 47204
Dec 12 17:28:36.055984 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:36.061904 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit.
Dec 12 17:28:36.062762 systemd[1]: sshd@11-91.99.219.209:22-139.178.89.65:47204.service: Deactivated successfully.
Dec 12 17:28:36.066405 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 17:28:36.070416 systemd-logind[1494]: Removed session 12.
Dec 12 17:28:41.224330 systemd[1]: Started sshd@12-91.99.219.209:22-139.178.89.65:35176.service - OpenSSH per-connection server daemon (139.178.89.65:35176).
Dec 12 17:28:42.233380 sshd[4159]: Accepted publickey for core from 139.178.89.65 port 35176 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:42.235204 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:42.241656 systemd-logind[1494]: New session 13 of user core.
Dec 12 17:28:42.253902 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 17:28:42.993212 sshd[4162]: Connection closed by 139.178.89.65 port 35176
Dec 12 17:28:42.994073 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:42.999803 systemd[1]: sshd@12-91.99.219.209:22-139.178.89.65:35176.service: Deactivated successfully.
Dec 12 17:28:42.999902 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit.
Dec 12 17:28:43.002929 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 17:28:43.006471 systemd-logind[1494]: Removed session 13.
Dec 12 17:28:43.166298 systemd[1]: Started sshd@13-91.99.219.209:22-139.178.89.65:35178.service - OpenSSH per-connection server daemon (139.178.89.65:35178).
Dec 12 17:28:44.162242 sshd[4174]: Accepted publickey for core from 139.178.89.65 port 35178 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:44.164664 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:44.172721 systemd-logind[1494]: New session 14 of user core.
Dec 12 17:28:44.178381 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 17:28:45.075967 sshd[4177]: Connection closed by 139.178.89.65 port 35178
Dec 12 17:28:45.078192 sshd-session[4174]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:45.085246 systemd[1]: sshd@13-91.99.219.209:22-139.178.89.65:35178.service: Deactivated successfully.
Dec 12 17:28:45.091164 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 17:28:45.096441 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit.
Dec 12 17:28:45.098082 systemd-logind[1494]: Removed session 14.
Dec 12 17:28:45.249807 systemd[1]: Started sshd@14-91.99.219.209:22-139.178.89.65:35184.service - OpenSSH per-connection server daemon (139.178.89.65:35184).
Dec 12 17:28:46.223821 sshd[4187]: Accepted publickey for core from 139.178.89.65 port 35184 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:46.227089 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:46.234471 systemd-logind[1494]: New session 15 of user core.
Dec 12 17:28:46.238163 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 17:28:47.474866 sshd[4190]: Connection closed by 139.178.89.65 port 35184
Dec 12 17:28:47.474731 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:47.484271 systemd[1]: sshd@14-91.99.219.209:22-139.178.89.65:35184.service: Deactivated successfully.
Dec 12 17:28:47.487152 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 17:28:47.488435 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit.
Dec 12 17:28:47.491457 systemd-logind[1494]: Removed session 15.
Dec 12 17:28:47.645815 systemd[1]: Started sshd@15-91.99.219.209:22-139.178.89.65:35186.service - OpenSSH per-connection server daemon (139.178.89.65:35186).
Dec 12 17:28:48.649787 sshd[4207]: Accepted publickey for core from 139.178.89.65 port 35186 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:48.651499 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:48.658125 systemd-logind[1494]: New session 16 of user core.
Dec 12 17:28:48.672777 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 17:28:49.544633 sshd[4210]: Connection closed by 139.178.89.65 port 35186
Dec 12 17:28:49.545751 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:49.554025 systemd[1]: sshd@15-91.99.219.209:22-139.178.89.65:35186.service: Deactivated successfully.
Dec 12 17:28:49.557347 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 17:28:49.559872 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit.
Dec 12 17:28:49.561340 systemd-logind[1494]: Removed session 16.
Dec 12 17:28:49.721526 systemd[1]: Started sshd@16-91.99.219.209:22-139.178.89.65:35196.service - OpenSSH per-connection server daemon (139.178.89.65:35196).
Dec 12 17:28:50.735077 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 35196 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:50.736711 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:50.743950 systemd-logind[1494]: New session 17 of user core.
Dec 12 17:28:50.750142 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:28:51.485641 sshd[4222]: Connection closed by 139.178.89.65 port 35196
Dec 12 17:28:51.486322 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:51.494236 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:28:51.495092 systemd[1]: sshd@16-91.99.219.209:22-139.178.89.65:35196.service: Deactivated successfully.
Dec 12 17:28:51.500311 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:28:51.503117 systemd-logind[1494]: Removed session 17.
Dec 12 17:28:56.658161 systemd[1]: Started sshd@17-91.99.219.209:22-139.178.89.65:48362.service - OpenSSH per-connection server daemon (139.178.89.65:48362).
Dec 12 17:28:57.658814 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 48362 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:28:57.660615 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:57.666724 systemd-logind[1494]: New session 18 of user core.
Dec 12 17:28:57.673263 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:28:58.411544 sshd[4241]: Connection closed by 139.178.89.65 port 48362
Dec 12 17:28:58.412571 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Dec 12 17:28:58.420241 systemd[1]: sshd@17-91.99.219.209:22-139.178.89.65:48362.service: Deactivated successfully.
Dec 12 17:28:58.424019 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:28:58.425781 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:28:58.428024 systemd-logind[1494]: Removed session 18.
Dec 12 17:29:03.592622 systemd[1]: Started sshd@18-91.99.219.209:22-139.178.89.65:46244.service - OpenSSH per-connection server daemon (139.178.89.65:46244).
Dec 12 17:29:04.610947 sshd[4255]: Accepted publickey for core from 139.178.89.65 port 46244 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:29:04.612325 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:04.621484 systemd-logind[1494]: New session 19 of user core.
Dec 12 17:29:04.629427 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:29:05.383482 sshd[4258]: Connection closed by 139.178.89.65 port 46244
Dec 12 17:29:05.384447 sshd-session[4255]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:05.390676 systemd[1]: sshd@18-91.99.219.209:22-139.178.89.65:46244.service: Deactivated successfully.
Dec 12 17:29:05.393471 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:29:05.397633 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:29:05.399179 systemd-logind[1494]: Removed session 19.
Dec 12 17:29:05.572461 systemd[1]: Started sshd@19-91.99.219.209:22-139.178.89.65:46250.service - OpenSSH per-connection server daemon (139.178.89.65:46250).
Dec 12 17:29:06.643230 sshd[4270]: Accepted publickey for core from 139.178.89.65 port 46250 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:29:06.645287 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:06.650366 systemd-logind[1494]: New session 20 of user core.
Dec 12 17:29:06.656171 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 17:29:10.095100 containerd[1523]: time="2025-12-12T17:29:10.095033491Z" level=info msg="StopContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" with timeout 30 (s)"
Dec 12 17:29:10.096619 containerd[1523]: time="2025-12-12T17:29:10.096583726Z" level=info msg="Stop container \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" with signal terminated"
Dec 12 17:29:10.118807 containerd[1523]: time="2025-12-12T17:29:10.118284889Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 17:29:10.124609 systemd[1]: cri-containerd-bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665.scope: Deactivated successfully.
Dec 12 17:29:10.130012 containerd[1523]: time="2025-12-12T17:29:10.129943967Z" level=info msg="received container exit event container_id:\"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" id:\"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" pid:3299 exited_at:{seconds:1765560550 nanos:129385489}"
Dec 12 17:29:10.131400 containerd[1523]: time="2025-12-12T17:29:10.131341002Z" level=info msg="StopContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" with timeout 2 (s)"
Dec 12 17:29:10.132610 containerd[1523]: time="2025-12-12T17:29:10.132392478Z" level=info msg="Stop container \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" with signal terminated"
Dec 12 17:29:10.145688 systemd-networkd[1412]: lxc_health: Link DOWN
Dec 12 17:29:10.146910 systemd-networkd[1412]: lxc_health: Lost carrier
Dec 12 17:29:10.176076 systemd[1]: cri-containerd-06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c.scope: Deactivated successfully.
Dec 12 17:29:10.176406 systemd[1]: cri-containerd-06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c.scope: Consumed 8.143s CPU time, 124.7M memory peak, 128K read from disk, 12.9M written to disk.
Dec 12 17:29:10.188040 containerd[1523]: time="2025-12-12T17:29:10.187943561Z" level=info msg="received container exit event container_id:\"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" id:\"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" pid:3371 exited_at:{seconds:1765560550 nanos:187230284}"
Dec 12 17:29:10.202258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665-rootfs.mount: Deactivated successfully.
Dec 12 17:29:10.228106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c-rootfs.mount: Deactivated successfully.
Dec 12 17:29:10.229223 containerd[1523]: time="2025-12-12T17:29:10.229170335Z" level=info msg="StopContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" returns successfully"
Dec 12 17:29:10.231944 containerd[1523]: time="2025-12-12T17:29:10.231888565Z" level=info msg="StopPodSandbox for \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\""
Dec 12 17:29:10.232164 containerd[1523]: time="2025-12-12T17:29:10.232138604Z" level=info msg="Container to stop \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.238005 containerd[1523]: time="2025-12-12T17:29:10.237756144Z" level=info msg="StopContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" returns successfully"
Dec 12 17:29:10.239667 containerd[1523]: time="2025-12-12T17:29:10.239589138Z" level=info msg="StopPodSandbox for \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\""
Dec 12 17:29:10.240072 containerd[1523]: time="2025-12-12T17:29:10.239997097Z" level=info msg="Container to stop \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.240072 containerd[1523]: time="2025-12-12T17:29:10.240022616Z" level=info msg="Container to stop \"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.240292 containerd[1523]: time="2025-12-12T17:29:10.240153336Z" level=info msg="Container to stop \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.240292 containerd[1523]: time="2025-12-12T17:29:10.240222296Z" level=info msg="Container to stop \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.240292 containerd[1523]: time="2025-12-12T17:29:10.240232896Z" level=info msg="Container to stop \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:29:10.252563 systemd[1]: cri-containerd-057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa.scope: Deactivated successfully.
Dec 12 17:29:10.258328 systemd[1]: cri-containerd-6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b.scope: Deactivated successfully.
Dec 12 17:29:10.262140 containerd[1523]: time="2025-12-12T17:29:10.261774419Z" level=info msg="received sandbox exit event container_id:\"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" id:\"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" exit_status:137 exited_at:{seconds:1765560550 nanos:261250101}" monitor_name=podsandbox
Dec 12 17:29:10.263731 containerd[1523]: time="2025-12-12T17:29:10.263686852Z" level=info msg="received sandbox exit event container_id:\"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" id:\"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" exit_status:137 exited_at:{seconds:1765560550 nanos:263237734}" monitor_name=podsandbox
Dec 12 17:29:10.295418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa-rootfs.mount: Deactivated successfully.
Dec 12 17:29:10.303018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b-rootfs.mount: Deactivated successfully.
Dec 12 17:29:10.308849 containerd[1523]: time="2025-12-12T17:29:10.306685780Z" level=info msg="shim disconnected" id=6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b namespace=k8s.io
Dec 12 17:29:10.308849 containerd[1523]: time="2025-12-12T17:29:10.306885019Z" level=warning msg="cleaning up after shim disconnected" id=6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b namespace=k8s.io
Dec 12 17:29:10.308849 containerd[1523]: time="2025-12-12T17:29:10.306927859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 17:29:10.309756 containerd[1523]: time="2025-12-12T17:29:10.309712249Z" level=info msg="shim disconnected" id=057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa namespace=k8s.io
Dec 12 17:29:10.310352 containerd[1523]: time="2025-12-12T17:29:10.310158487Z" level=warning msg="cleaning up after shim disconnected" id=057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa namespace=k8s.io
Dec 12 17:29:10.310352 containerd[1523]: time="2025-12-12T17:29:10.310210087Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 17:29:10.328093 containerd[1523]: time="2025-12-12T17:29:10.328043464Z" level=info msg="TearDown network for sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" successfully"
Dec 12 17:29:10.328093 containerd[1523]: time="2025-12-12T17:29:10.328084384Z" level=info msg="StopPodSandbox for \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" returns successfully"
Dec 12 17:29:10.330686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b-shm.mount: Deactivated successfully.
Dec 12 17:29:10.332284 containerd[1523]: time="2025-12-12T17:29:10.332220169Z" level=info msg="received sandbox container exit event sandbox_id:\"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" exit_status:137 exited_at:{seconds:1765560550 nanos:263237734}" monitor_name=criService
Dec 12 17:29:10.341838 containerd[1523]: time="2025-12-12T17:29:10.340796699Z" level=info msg="received sandbox container exit event sandbox_id:\"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" exit_status:137 exited_at:{seconds:1765560550 nanos:261250101}" monitor_name=criService
Dec 12 17:29:10.342190 containerd[1523]: time="2025-12-12T17:29:10.341193897Z" level=info msg="TearDown network for sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" successfully"
Dec 12 17:29:10.342230 containerd[1523]: time="2025-12-12T17:29:10.342194134Z" level=info msg="StopPodSandbox for \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" returns successfully"
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.391803 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cni-path\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.391894 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-lib-modules\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.391918 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-bpf-maps\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.391946 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cni-path" (OuterVolumeSpecName: "cni-path") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.391997 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4mc2\" (UniqueName: \"kubernetes.io/projected/6525ada2-da67-4802-9c8b-74bc16d633e3-kube-api-access-d4mc2\") pod \"6525ada2-da67-4802-9c8b-74bc16d633e3\" (UID: \"6525ada2-da67-4802-9c8b-74bc16d633e3\") "
Dec 12 17:29:10.393933 kubelet[2742]: I1212 17:29:10.392034 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6525ada2-da67-4802-9c8b-74bc16d633e3-cilium-config-path\") pod \"6525ada2-da67-4802-9c8b-74bc16d633e3\" (UID: \"6525ada2-da67-4802-9c8b-74bc16d633e3\") "
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392021 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392055 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hostproc\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392079 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-net\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392104 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/809d2154-cff2-4bf3-ba53-5676f5eddbb6-clustermesh-secrets\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392128 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-run\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394512 kubelet[2742]: I1212 17:29:10.392148 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-cgroup\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392179 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-config-path\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392207 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ddkb\" (UniqueName: \"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-kube-api-access-6ddkb\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392229 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-kernel\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392250 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-etc-cni-netd\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392276 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hubble-tls\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394655 kubelet[2742]: I1212 17:29:10.392298 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-xtables-lock\") pod \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\" (UID: \"809d2154-cff2-4bf3-ba53-5676f5eddbb6\") "
Dec 12 17:29:10.394805 kubelet[2742]: I1212 17:29:10.392359 2742 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cni-path\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.394805 kubelet[2742]: I1212 17:29:10.392375 2742 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-lib-modules\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.394805 kubelet[2742]: I1212 17:29:10.392416 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.395572 kubelet[2742]: I1212 17:29:10.395412 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hostproc" (OuterVolumeSpecName: "hostproc") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.395572 kubelet[2742]: I1212 17:29:10.395513 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.396141 kubelet[2742]: I1212 17:29:10.396068 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.396390 kubelet[2742]: I1212 17:29:10.396282 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6525ada2-da67-4802-9c8b-74bc16d633e3-kube-api-access-d4mc2" (OuterVolumeSpecName: "kube-api-access-d4mc2") pod "6525ada2-da67-4802-9c8b-74bc16d633e3" (UID: "6525ada2-da67-4802-9c8b-74bc16d633e3"). InnerVolumeSpecName "kube-api-access-d4mc2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 17:29:10.397379 kubelet[2742]: I1212 17:29:10.397270 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.397379 kubelet[2742]: I1212 17:29:10.397321 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.397569 kubelet[2742]: I1212 17:29:10.397548 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.399857 kubelet[2742]: I1212 17:29:10.397771 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:29:10.403651 kubelet[2742]: I1212 17:29:10.399540 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6525ada2-da67-4802-9c8b-74bc16d633e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6525ada2-da67-4802-9c8b-74bc16d633e3" (UID: "6525ada2-da67-4802-9c8b-74bc16d633e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 17:29:10.403922 kubelet[2742]: I1212 17:29:10.403511 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 17:29:10.404727 kubelet[2742]: I1212 17:29:10.404695 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809d2154-cff2-4bf3-ba53-5676f5eddbb6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 17:29:10.405386 kubelet[2742]: I1212 17:29:10.404717 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 17:29:10.405645 kubelet[2742]: I1212 17:29:10.405619 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-kube-api-access-6ddkb" (OuterVolumeSpecName: "kube-api-access-6ddkb") pod "809d2154-cff2-4bf3-ba53-5676f5eddbb6" (UID: "809d2154-cff2-4bf3-ba53-5676f5eddbb6"). InnerVolumeSpecName "kube-api-access-6ddkb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 17:29:10.493109 kubelet[2742]: I1212 17:29:10.493043 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-config-path\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493109 kubelet[2742]: I1212 17:29:10.493101 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ddkb\" (UniqueName: \"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-kube-api-access-6ddkb\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493109 kubelet[2742]: I1212 17:29:10.493129 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-kernel\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493149 2742 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-xtables-lock\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493169 2742 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-etc-cni-netd\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493188 2742 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hubble-tls\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\""
Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493205 2742 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName:
\"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-bpf-maps\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493222 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4mc2\" (UniqueName: \"kubernetes.io/projected/6525ada2-da67-4802-9c8b-74bc16d633e3-kube-api-access-d4mc2\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493244 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6525ada2-da67-4802-9c8b-74bc16d633e3-cilium-config-path\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493262 2742 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-hostproc\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493413 kubelet[2742]: I1212 17:29:10.493279 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-run\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493849 kubelet[2742]: I1212 17:29:10.493298 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-cilium-cgroup\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493849 kubelet[2742]: I1212 17:29:10.493320 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/809d2154-cff2-4bf3-ba53-5676f5eddbb6-host-proc-sys-net\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.493849 kubelet[2742]: I1212 17:29:10.493338 2742 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/809d2154-cff2-4bf3-ba53-5676f5eddbb6-clustermesh-secrets\") on node \"ci-4459-2-2-4-c728b0285d\" DevicePath \"\"" Dec 12 17:29:10.504927 systemd[1]: Removed slice kubepods-burstable-pod809d2154_cff2_4bf3_ba53_5676f5eddbb6.slice - libcontainer container kubepods-burstable-pod809d2154_cff2_4bf3_ba53_5676f5eddbb6.slice. Dec 12 17:29:10.505073 systemd[1]: kubepods-burstable-pod809d2154_cff2_4bf3_ba53_5676f5eddbb6.slice: Consumed 8.270s CPU time, 125.1M memory peak, 128K read from disk, 12.9M written to disk. Dec 12 17:29:10.509412 systemd[1]: Removed slice kubepods-besteffort-pod6525ada2_da67_4802_9c8b_74bc16d633e3.slice - libcontainer container kubepods-besteffort-pod6525ada2_da67_4802_9c8b_74bc16d633e3.slice. Dec 12 17:29:11.201252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa-shm.mount: Deactivated successfully. Dec 12 17:29:11.201865 systemd[1]: var-lib-kubelet-pods-809d2154\x2dcff2\x2d4bf3\x2dba53\x2d5676f5eddbb6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ddkb.mount: Deactivated successfully. Dec 12 17:29:11.202073 systemd[1]: var-lib-kubelet-pods-6525ada2\x2dda67\x2d4802\x2d9c8b\x2d74bc16d633e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd4mc2.mount: Deactivated successfully. Dec 12 17:29:11.202212 systemd[1]: var-lib-kubelet-pods-809d2154\x2dcff2\x2d4bf3\x2dba53\x2d5676f5eddbb6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:29:11.202530 systemd[1]: var-lib-kubelet-pods-809d2154\x2dcff2\x2d4bf3\x2dba53\x2d5676f5eddbb6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 12 17:29:11.263132 kubelet[2742]: I1212 17:29:11.262710 2742 scope.go:117] "RemoveContainer" containerID="bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665" Dec 12 17:29:11.270814 containerd[1523]: time="2025-12-12T17:29:11.270432922Z" level=info msg="RemoveContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\"" Dec 12 17:29:11.282932 containerd[1523]: time="2025-12-12T17:29:11.282869285Z" level=info msg="RemoveContainer for \"bb1ee257f727459be88f3901b6d8dfea2cf5b1dd24471c6b07e8354dfc1d6665\" returns successfully" Dec 12 17:29:11.284347 kubelet[2742]: I1212 17:29:11.284233 2742 scope.go:117] "RemoveContainer" containerID="06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c" Dec 12 17:29:11.289384 containerd[1523]: time="2025-12-12T17:29:11.288714268Z" level=info msg="RemoveContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\"" Dec 12 17:29:11.297022 containerd[1523]: time="2025-12-12T17:29:11.296917324Z" level=info msg="RemoveContainer for \"06cf996cd898ce11a9764ad5897519ed3e2cc0f7e0f07bd0dfcf1d0ff688f12c\" returns successfully" Dec 12 17:29:11.297739 kubelet[2742]: I1212 17:29:11.297551 2742 scope.go:117] "RemoveContainer" containerID="43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095" Dec 12 17:29:11.300045 containerd[1523]: time="2025-12-12T17:29:11.300005595Z" level=info msg="RemoveContainer for \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\"" Dec 12 17:29:11.308089 containerd[1523]: time="2025-12-12T17:29:11.308042691Z" level=info msg="RemoveContainer for \"43991acdacfaae84fe0b328beb257bfac4b196f4de8e295469683fdc269f1095\" returns successfully" Dec 12 17:29:11.308800 kubelet[2742]: I1212 17:29:11.308380 2742 scope.go:117] "RemoveContainer" containerID="912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d" Dec 12 17:29:11.312754 containerd[1523]: time="2025-12-12T17:29:11.312255999Z" level=info msg="RemoveContainer for 
\"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\"" Dec 12 17:29:11.320395 containerd[1523]: time="2025-12-12T17:29:11.320151016Z" level=info msg="RemoveContainer for \"912f02d8040915fcdb35a853a72e0e6a70ccea79ee28344640ea436e9465e49d\" returns successfully" Dec 12 17:29:11.320876 kubelet[2742]: I1212 17:29:11.320586 2742 scope.go:117] "RemoveContainer" containerID="02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83" Dec 12 17:29:11.322986 containerd[1523]: time="2025-12-12T17:29:11.322937567Z" level=info msg="RemoveContainer for \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\"" Dec 12 17:29:11.327736 containerd[1523]: time="2025-12-12T17:29:11.327688994Z" level=info msg="RemoveContainer for \"02efbb4fd4a840149b8463b19d74b741a77ae315f85d55afbf756709ec936e83\" returns successfully" Dec 12 17:29:11.328064 kubelet[2742]: I1212 17:29:11.328008 2742 scope.go:117] "RemoveContainer" containerID="4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255" Dec 12 17:29:11.330615 containerd[1523]: time="2025-12-12T17:29:11.330561265Z" level=info msg="RemoveContainer for \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\"" Dec 12 17:29:11.334127 containerd[1523]: time="2025-12-12T17:29:11.334057975Z" level=info msg="RemoveContainer for \"4b4cc48c1e5ab732f0018ce87585e2e74bf08414910e6f8ed4f8b8e08f634255\" returns successfully" Dec 12 17:29:12.176269 sshd[4273]: Connection closed by 139.178.89.65 port 46250 Dec 12 17:29:12.176177 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:12.184280 systemd[1]: sshd@19-91.99.219.209:22-139.178.89.65:46250.service: Deactivated successfully. Dec 12 17:29:12.187333 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:29:12.189059 systemd[1]: session-20.scope: Consumed 2.219s CPU time, 25.7M memory peak. Dec 12 17:29:12.190528 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. 
Dec 12 17:29:12.193031 systemd-logind[1494]: Removed session 20. Dec 12 17:29:12.351434 systemd[1]: Started sshd@20-91.99.219.209:22-139.178.89.65:48790.service - OpenSSH per-connection server daemon (139.178.89.65:48790). Dec 12 17:29:12.498478 kubelet[2742]: I1212 17:29:12.498246 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6525ada2-da67-4802-9c8b-74bc16d633e3" path="/var/lib/kubelet/pods/6525ada2-da67-4802-9c8b-74bc16d633e3/volumes" Dec 12 17:29:12.500205 kubelet[2742]: I1212 17:29:12.499751 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809d2154-cff2-4bf3-ba53-5676f5eddbb6" path="/var/lib/kubelet/pods/809d2154-cff2-4bf3-ba53-5676f5eddbb6/volumes" Dec 12 17:29:12.654599 kubelet[2742]: E1212 17:29:12.654429 2742 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:29:13.346862 sshd[4418]: Accepted publickey for core from 139.178.89.65 port 48790 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:29:13.351256 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:13.358532 systemd-logind[1494]: New session 21 of user core. Dec 12 17:29:13.364138 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:29:15.355554 kubelet[2742]: I1212 17:29:15.355497 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="809d2154-cff2-4bf3-ba53-5676f5eddbb6" containerName="cilium-agent" Dec 12 17:29:15.355554 kubelet[2742]: I1212 17:29:15.355537 2742 memory_manager.go:355] "RemoveStaleState removing state" podUID="6525ada2-da67-4802-9c8b-74bc16d633e3" containerName="cilium-operator" Dec 12 17:29:15.365532 systemd[1]: Created slice kubepods-burstable-pod689ce4a1_2bd6_461f_af5b_61b8460eef44.slice - libcontainer container kubepods-burstable-pod689ce4a1_2bd6_461f_af5b_61b8460eef44.slice. Dec 12 17:29:15.378618 kubelet[2742]: I1212 17:29:15.378550 2742 status_manager.go:890] "Failed to get status for pod" podUID="689ce4a1-2bd6-461f-af5b-61b8460eef44" pod="kube-system/cilium-9m242" err="pods \"cilium-9m242\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" Dec 12 17:29:15.378791 kubelet[2742]: W1212 17:29:15.378641 2742 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459-2-2-4-c728b0285d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object Dec 12 17:29:15.378791 kubelet[2742]: E1212 17:29:15.378671 2742 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" logger="UnhandledError" Dec 12 17:29:15.378858 kubelet[2742]: W1212 17:29:15.378794 2742 
reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459-2-2-4-c728b0285d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object Dec 12 17:29:15.378858 kubelet[2742]: E1212 17:29:15.378809 2742 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" logger="UnhandledError" Dec 12 17:29:15.378911 kubelet[2742]: W1212 17:29:15.378865 2742 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459-2-2-4-c728b0285d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object Dec 12 17:29:15.378911 kubelet[2742]: E1212 17:29:15.378878 2742 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" logger="UnhandledError" Dec 12 17:29:15.379227 kubelet[2742]: W1212 17:29:15.379203 2742 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4459-2-2-4-c728b0285d" cannot list resource "secrets" in API group "" in the 
namespace "kube-system": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object Dec 12 17:29:15.379294 kubelet[2742]: E1212 17:29:15.379230 2742 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4459-2-2-4-c728b0285d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-4-c728b0285d' and this object" logger="UnhandledError" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430049 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/689ce4a1-2bd6-461f-af5b-61b8460eef44-hubble-tls\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430103 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-hostproc\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430122 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-cgroup\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430147 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-ipsec-secrets\") pod \"cilium-9m242\" (UID: 
\"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430165 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-host-proc-sys-kernel\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430242 kubelet[2742]: I1212 17:29:15.430188 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-run\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: I1212 17:29:15.430203 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-cni-path\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: I1212 17:29:15.430254 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-host-proc-sys-net\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: I1212 17:29:15.430300 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkz6r\" (UniqueName: \"kubernetes.io/projected/689ce4a1-2bd6-461f-af5b-61b8460eef44-kube-api-access-vkz6r\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: 
I1212 17:29:15.430329 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-lib-modules\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: I1212 17:29:15.430354 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-clustermesh-secrets\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430579 kubelet[2742]: I1212 17:29:15.430389 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-bpf-maps\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430706 kubelet[2742]: I1212 17:29:15.430410 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-etc-cni-netd\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430706 kubelet[2742]: I1212 17:29:15.430427 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-config-path\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.430706 kubelet[2742]: I1212 17:29:15.430467 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/689ce4a1-2bd6-461f-af5b-61b8460eef44-xtables-lock\") pod \"cilium-9m242\" (UID: \"689ce4a1-2bd6-461f-af5b-61b8460eef44\") " pod="kube-system/cilium-9m242" Dec 12 17:29:15.459734 sshd[4421]: Connection closed by 139.178.89.65 port 48790 Dec 12 17:29:15.460450 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:15.466903 systemd[1]: sshd@20-91.99.219.209:22-139.178.89.65:48790.service: Deactivated successfully. Dec 12 17:29:15.470269 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:29:15.470750 systemd[1]: session-21.scope: Consumed 1.331s CPU time, 23.7M memory peak. Dec 12 17:29:15.471920 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:29:15.474225 systemd-logind[1494]: Removed session 21. Dec 12 17:29:15.632755 systemd[1]: Started sshd@21-91.99.219.209:22-139.178.89.65:48800.service - OpenSSH per-connection server daemon (139.178.89.65:48800). Dec 12 17:29:16.532436 kubelet[2742]: E1212 17:29:16.532251 2742 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 12 17:29:16.532436 kubelet[2742]: E1212 17:29:16.532394 2742 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-ipsec-secrets podName:689ce4a1-2bd6-461f-af5b-61b8460eef44 nodeName:}" failed. No retries permitted until 2025-12-12 17:29:17.032364233 +0000 UTC m=+204.730794540 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-ipsec-secrets") pod "cilium-9m242" (UID: "689ce4a1-2bd6-461f-af5b-61b8460eef44") : failed to sync secret cache: timed out waiting for the condition Dec 12 17:29:16.533421 kubelet[2742]: E1212 17:29:16.532768 2742 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 12 17:29:16.533421 kubelet[2742]: E1212 17:29:16.532812 2742 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-clustermesh-secrets podName:689ce4a1-2bd6-461f-af5b-61b8460eef44 nodeName:}" failed. No retries permitted until 2025-12-12 17:29:17.032799153 +0000 UTC m=+204.731229460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/689ce4a1-2bd6-461f-af5b-61b8460eef44-clustermesh-secrets") pod "cilium-9m242" (UID: "689ce4a1-2bd6-461f-af5b-61b8460eef44") : failed to sync secret cache: timed out waiting for the condition Dec 12 17:29:16.533766 kubelet[2742]: E1212 17:29:16.533695 2742 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 17:29:16.533823 kubelet[2742]: E1212 17:29:16.533776 2742 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-config-path podName:689ce4a1-2bd6-461f-af5b-61b8460eef44 nodeName:}" failed. No retries permitted until 2025-12-12 17:29:17.033761073 +0000 UTC m=+204.732191380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/689ce4a1-2bd6-461f-af5b-61b8460eef44-cilium-config-path") pod "cilium-9m242" (UID: "689ce4a1-2bd6-461f-af5b-61b8460eef44") : failed to sync configmap cache: timed out waiting for the condition Dec 12 17:29:16.640959 sshd[4432]: Accepted publickey for core from 139.178.89.65 port 48800 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA Dec 12 17:29:16.643035 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:16.650041 systemd-logind[1494]: New session 22 of user core. Dec 12 17:29:16.657208 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:29:16.927523 kubelet[2742]: I1212 17:29:16.926042 2742 setters.go:602] "Node became not ready" node="ci-4459-2-2-4-c728b0285d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T17:29:16Z","lastTransitionTime":"2025-12-12T17:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 17:29:17.163259 update_engine[1495]: I20251212 17:29:17.163184 1495 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 17:29:17.163259 update_engine[1495]: I20251212 17:29:17.163244 1495 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 17:29:17.163864 update_engine[1495]: I20251212 17:29:17.163689 1495 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 17:29:17.164224 update_engine[1495]: I20251212 17:29:17.164176 1495 omaha_request_params.cc:62] Current group set to stable Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164286 1495 update_attempter.cc:499] Already updated boot flags. Skipping. 
Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164299 1495 update_attempter.cc:643] Scheduling an action processor start. Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164316 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164363 1495 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164425 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164433 1495 omaha_request_action.cc:272] Request: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: Dec 12 17:29:17.164626 update_engine[1495]: I20251212 17:29:17.164438 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:29:17.165434 locksmithd[1551]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 17:29:17.166801 update_engine[1495]: I20251212 17:29:17.166736 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:29:17.167949 update_engine[1495]: I20251212 17:29:17.167899 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 17:29:17.169259 update_engine[1495]: E20251212 17:29:17.169176 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 12 17:29:17.169358 update_engine[1495]: I20251212 17:29:17.169291 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 17:29:17.172314 containerd[1523]: time="2025-12-12T17:29:17.171409465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m242,Uid:689ce4a1-2bd6-461f-af5b-61b8460eef44,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:17.192484 containerd[1523]: time="2025-12-12T17:29:17.192415996Z" level=info msg="connecting to shim 3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:17.225176 systemd[1]: Started cri-containerd-3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f.scope - libcontainer container 3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f. 
Dec 12 17:29:17.259688 containerd[1523]: time="2025-12-12T17:29:17.259645111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m242,Uid:689ce4a1-2bd6-461f-af5b-61b8460eef44,Namespace:kube-system,Attempt:0,} returns sandbox id \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\""
Dec 12 17:29:17.265202 containerd[1523]: time="2025-12-12T17:29:17.265118314Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 17:29:17.278219 containerd[1523]: time="2025-12-12T17:29:17.277661521Z" level=info msg="Container 26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:17.290000 containerd[1523]: time="2025-12-12T17:29:17.289932407Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806\""
Dec 12 17:29:17.291891 containerd[1523]: time="2025-12-12T17:29:17.291255048Z" level=info msg="StartContainer for \"26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806\""
Dec 12 17:29:17.293582 containerd[1523]: time="2025-12-12T17:29:17.293478409Z" level=info msg="connecting to shim 26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" protocol=ttrpc version=3
Dec 12 17:29:17.320694 sshd[4436]: Connection closed by 139.178.89.65 port 48800
Dec 12 17:29:17.321997 sshd-session[4432]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:17.325799 systemd[1]: Started cri-containerd-26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806.scope - libcontainer container 26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806.
Dec 12 17:29:17.333083 systemd[1]: sshd@21-91.99.219.209:22-139.178.89.65:48800.service: Deactivated successfully.
Dec 12 17:29:17.339599 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 17:29:17.342853 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit.
Dec 12 17:29:17.346620 systemd-logind[1494]: Removed session 22.
Dec 12 17:29:17.389952 containerd[1523]: time="2025-12-12T17:29:17.389889500Z" level=info msg="StartContainer for \"26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806\" returns successfully"
Dec 12 17:29:17.405217 systemd[1]: cri-containerd-26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806.scope: Deactivated successfully.
Dec 12 17:29:17.409892 containerd[1523]: time="2025-12-12T17:29:17.409737150Z" level=info msg="received container exit event container_id:\"26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806\" id:\"26907b18c3e39f21fa4cf8a48c8e0be00216682ad25016a2acf3a7771c1ed806\" pid:4499 exited_at:{seconds:1765560557 nanos:408580950}"
Dec 12 17:29:17.490248 systemd[1]: Started sshd@22-91.99.219.209:22-139.178.89.65:48810.service - OpenSSH per-connection server daemon (139.178.89.65:48810).
Dec 12 17:29:17.656634 kubelet[2742]: E1212 17:29:17.656562 2742 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 12 17:29:18.055777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500403824.mount: Deactivated successfully.
Dec 12 17:29:18.334338 containerd[1523]: time="2025-12-12T17:29:18.334168179Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 17:29:18.360097 containerd[1523]: time="2025-12-12T17:29:18.359092246Z" level=info msg="Container ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:18.371923 containerd[1523]: time="2025-12-12T17:29:18.371848180Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836\""
Dec 12 17:29:18.374269 containerd[1523]: time="2025-12-12T17:29:18.374215422Z" level=info msg="StartContainer for \"ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836\""
Dec 12 17:29:18.376605 containerd[1523]: time="2025-12-12T17:29:18.376514185Z" level=info msg="connecting to shim ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" protocol=ttrpc version=3
Dec 12 17:29:18.411107 systemd[1]: Started cri-containerd-ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836.scope - libcontainer container ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836.
Dec 12 17:29:18.453639 containerd[1523]: time="2025-12-12T17:29:18.452911467Z" level=info msg="StartContainer for \"ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836\" returns successfully"
Dec 12 17:29:18.463112 systemd[1]: cri-containerd-ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836.scope: Deactivated successfully.
Dec 12 17:29:18.466398 containerd[1523]: time="2025-12-12T17:29:18.466342961Z" level=info msg="received container exit event container_id:\"ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836\" id:\"ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836\" pid:4552 exited_at:{seconds:1765560558 nanos:466052681}"
Dec 12 17:29:18.499060 sshd[4536]: Accepted publickey for core from 139.178.89.65 port 48810 ssh2: RSA SHA256:iFtGnG2WH9XVjjUjszxJhaCaYvl4oOJ7+tJOMAqvDiA
Dec 12 17:29:18.502418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1de2239e9c64b43d1f69a671ecb47810c451ee002e8a6b6cb6592ab63d7836-rootfs.mount: Deactivated successfully.
Dec 12 17:29:18.506552 sshd-session[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:18.515984 systemd-logind[1494]: New session 23 of user core.
Dec 12 17:29:18.519073 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 17:29:19.330046 containerd[1523]: time="2025-12-12T17:29:19.329983063Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:29:19.357313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235781662.mount: Deactivated successfully.
Dec 12 17:29:19.363113 containerd[1523]: time="2025-12-12T17:29:19.361818755Z" level=info msg="Container aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:19.386423 containerd[1523]: time="2025-12-12T17:29:19.386036394Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb\""
Dec 12 17:29:19.392155 containerd[1523]: time="2025-12-12T17:29:19.392099844Z" level=info msg="StartContainer for \"aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb\""
Dec 12 17:29:19.397870 containerd[1523]: time="2025-12-12T17:29:19.397283372Z" level=info msg="connecting to shim aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" protocol=ttrpc version=3
Dec 12 17:29:19.426214 systemd[1]: Started cri-containerd-aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb.scope - libcontainer container aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb.
Dec 12 17:29:19.505884 containerd[1523]: time="2025-12-12T17:29:19.505709947Z" level=info msg="StartContainer for \"aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb\" returns successfully"
Dec 12 17:29:19.508469 systemd[1]: cri-containerd-aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb.scope: Deactivated successfully.
Dec 12 17:29:19.512436 containerd[1523]: time="2025-12-12T17:29:19.512391557Z" level=info msg="received container exit event container_id:\"aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb\" id:\"aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb\" pid:4603 exited_at:{seconds:1765560559 nanos:511905877}"
Dec 12 17:29:19.536256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aac7537c5dab709827572786c9749c0cd55306ccd47ad88c808eeb15d691d0fb-rootfs.mount: Deactivated successfully.
Dec 12 17:29:20.339325 containerd[1523]: time="2025-12-12T17:29:20.338047626Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:29:20.359721 containerd[1523]: time="2025-12-12T17:29:20.358470909Z" level=info msg="Container dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:20.373573 containerd[1523]: time="2025-12-12T17:29:20.372430499Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f\""
Dec 12 17:29:20.374800 containerd[1523]: time="2025-12-12T17:29:20.374673024Z" level=info msg="StartContainer for \"dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f\""
Dec 12 17:29:20.380640 containerd[1523]: time="2025-12-12T17:29:20.380592317Z" level=info msg="connecting to shim dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" protocol=ttrpc version=3
Dec 12 17:29:20.412138 systemd[1]: Started cri-containerd-dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f.scope - libcontainer container dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f.
Dec 12 17:29:20.445689 systemd[1]: cri-containerd-dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f.scope: Deactivated successfully.
Dec 12 17:29:20.451145 containerd[1523]: time="2025-12-12T17:29:20.451075708Z" level=info msg="received container exit event container_id:\"dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f\" id:\"dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f\" pid:4643 exited_at:{seconds:1765560560 nanos:448193941}"
Dec 12 17:29:20.452159 containerd[1523]: time="2025-12-12T17:29:20.451267868Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod689ce4a1_2bd6_461f_af5b_61b8460eef44.slice/cri-containerd-dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f.scope/memory.events\": no such file or directory"
Dec 12 17:29:20.460222 containerd[1523]: time="2025-12-12T17:29:20.460172927Z" level=info msg="StartContainer for \"dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f\" returns successfully"
Dec 12 17:29:20.477578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dab783cc845ab4d4e04502c0883ef754274a71e23f926b243f6bdd111e12656f-rootfs.mount: Deactivated successfully.
Dec 12 17:29:21.348609 containerd[1523]: time="2025-12-12T17:29:21.348553770Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:29:21.363875 containerd[1523]: time="2025-12-12T17:29:21.363158808Z" level=info msg="Container 970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:21.368191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901348698.mount: Deactivated successfully.
Dec 12 17:29:21.378378 containerd[1523]: time="2025-12-12T17:29:21.378318729Z" level=info msg="CreateContainer within sandbox \"3191fd154f05d3190f761cae492c3f264c7a2b84f6556d651a4685f0e07e951f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1\""
Dec 12 17:29:21.381876 containerd[1523]: time="2025-12-12T17:29:21.379720973Z" level=info msg="StartContainer for \"970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1\""
Dec 12 17:29:21.382698 containerd[1523]: time="2025-12-12T17:29:21.382569100Z" level=info msg="connecting to shim 970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1" address="unix:///run/containerd/s/9ab93fcf03bc466356495e058a7ba29e5d9cc4d4e367e4e0652d781ea1354d6d" protocol=ttrpc version=3
Dec 12 17:29:21.413370 systemd[1]: Started cri-containerd-970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1.scope - libcontainer container 970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1.
Dec 12 17:29:21.475032 containerd[1523]: time="2025-12-12T17:29:21.474968946Z" level=info msg="StartContainer for \"970899fd8acefaa95f0ac604320a1420f6150a394214abb3b07c8ededb7198e1\" returns successfully"
Dec 12 17:29:21.827907 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 12 17:29:24.992373 systemd-networkd[1412]: lxc_health: Link UP
Dec 12 17:29:25.001550 systemd-networkd[1412]: lxc_health: Gained carrier
Dec 12 17:29:25.197727 kubelet[2742]: I1212 17:29:25.197533 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9m242" podStartSLOduration=10.197514495 podStartE2EDuration="10.197514495s" podCreationTimestamp="2025-12-12 17:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:22.382357235 +0000 UTC m=+210.080787542" watchObservedRunningTime="2025-12-12 17:29:25.197514495 +0000 UTC m=+212.895944802"
Dec 12 17:29:26.867201 systemd-networkd[1412]: lxc_health: Gained IPv6LL
Dec 12 17:29:27.168296 update_engine[1495]: I20251212 17:29:27.167891 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:29:27.170007 update_engine[1495]: I20251212 17:29:27.169958 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:29:27.170570 update_engine[1495]: I20251212 17:29:27.170538 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:29:27.171017 update_engine[1495]: E20251212 17:29:27.170991 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 12 17:29:27.171165 update_engine[1495]: I20251212 17:29:27.171149 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 12 17:29:32.300670 sshd[4584]: Connection closed by 139.178.89.65 port 48810
Dec 12 17:29:32.303685 sshd-session[4536]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:32.311518 systemd[1]: sshd@22-91.99.219.209:22-139.178.89.65:48810.service: Deactivated successfully.
Dec 12 17:29:32.317371 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 17:29:32.320293 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit.
Dec 12 17:29:32.323485 systemd-logind[1494]: Removed session 23.
Dec 12 17:29:33.832670 systemd[1]: Started sshd@23-91.99.219.209:22-94.156.152.7:45090.service - OpenSSH per-connection server daemon (94.156.152.7:45090).
Dec 12 17:29:34.018753 sshd[5346]: Invalid user admin from 94.156.152.7 port 45090
Dec 12 17:29:34.054760 sshd[5346]: Connection closed by invalid user admin 94.156.152.7 port 45090 [preauth]
Dec 12 17:29:34.057602 systemd[1]: sshd@23-91.99.219.209:22-94.156.152.7:45090.service: Deactivated successfully.
Dec 12 17:29:37.167889 update_engine[1495]: I20251212 17:29:37.167654 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:29:37.167889 update_engine[1495]: I20251212 17:29:37.167806 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:29:37.170174 update_engine[1495]: I20251212 17:29:37.168536 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:29:37.170174 update_engine[1495]: E20251212 17:29:37.169106 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 12 17:29:37.170174 update_engine[1495]: I20251212 17:29:37.169306 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 12 17:29:46.639773 kubelet[2742]: E1212 17:29:46.639565 2742 controller.go:195] "Failed to update lease" err="Put \"https://91.99.219.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-4-c728b0285d?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 12 17:29:46.860560 systemd[1]: cri-containerd-e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c.scope: Deactivated successfully.
Dec 12 17:29:46.861937 systemd[1]: cri-containerd-e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c.scope: Consumed 4.416s CPU time, 53.5M memory peak.
Dec 12 17:29:46.863683 containerd[1523]: time="2025-12-12T17:29:46.863601792Z" level=info msg="received container exit event container_id:\"e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c\" id:\"e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c\" pid:2605 exit_status:1 exited_at:{seconds:1765560586 nanos:862665099}"
Dec 12 17:29:46.889046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c-rootfs.mount: Deactivated successfully.
Dec 12 17:29:46.903352 kubelet[2742]: E1212 17:29:46.903181 2742 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36450->10.0.0.2:2379: read: connection timed out"
Dec 12 17:29:47.163925 update_engine[1495]: I20251212 17:29:47.162999 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:29:47.163925 update_engine[1495]: I20251212 17:29:47.163102 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:29:47.163925 update_engine[1495]: I20251212 17:29:47.163466 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:29:47.164586 update_engine[1495]: E20251212 17:29:47.164458 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 12 17:29:47.164647 update_engine[1495]: I20251212 17:29:47.164600 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 12 17:29:47.164647 update_engine[1495]: I20251212 17:29:47.164614 1495 omaha_request_action.cc:617] Omaha request response:
Dec 12 17:29:47.164730 update_engine[1495]: E20251212 17:29:47.164711 1495 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 12 17:29:47.164761 update_engine[1495]: I20251212 17:29:47.164738 1495 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 12 17:29:47.164761 update_engine[1495]: I20251212 17:29:47.164745 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:29:47.164761 update_engine[1495]: I20251212 17:29:47.164752 1495 update_attempter.cc:306] Processing Done.
Dec 12 17:29:47.164851 update_engine[1495]: E20251212 17:29:47.164767 1495 update_attempter.cc:619] Update failed.
Dec 12 17:29:47.164851 update_engine[1495]: I20251212 17:29:47.164773 1495 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 12 17:29:47.164851 update_engine[1495]: I20251212 17:29:47.164779 1495 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 12 17:29:47.164851 update_engine[1495]: I20251212 17:29:47.164786 1495 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 12 17:29:47.164946 update_engine[1495]: I20251212 17:29:47.164904 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 12 17:29:47.164946 update_engine[1495]: I20251212 17:29:47.164933 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 12 17:29:47.164946 update_engine[1495]: I20251212 17:29:47.164939 1495 omaha_request_action.cc:272] Request:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.164946 update_engine[1495]:
Dec 12 17:29:47.165126 update_engine[1495]: I20251212 17:29:47.164947 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 12 17:29:47.165126 update_engine[1495]: I20251212 17:29:47.164970 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 12 17:29:47.165876 locksmithd[1551]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.165901 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 12 17:29:47.166435 update_engine[1495]: E20251212 17:29:47.166282 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.166363 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.166373 1495 omaha_request_action.cc:617] Omaha request response:
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.166381 1495 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.166389 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 12 17:29:47.166435 update_engine[1495]: I20251212 17:29:47.166395 1495 update_attempter.cc:306] Processing Done.
Dec 12 17:29:47.166786 update_engine[1495]: I20251212 17:29:47.166402 1495 update_attempter.cc:310] Error event sent.
Dec 12 17:29:47.166786 update_engine[1495]: I20251212 17:29:47.166729 1495 update_check_scheduler.cc:74] Next update check in 49m59s
Dec 12 17:29:47.167337 locksmithd[1551]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 12 17:29:47.438191 kubelet[2742]: I1212 17:29:47.436510 2742 scope.go:117] "RemoveContainer" containerID="e700f84fa52e091f0ba435abc96a91ba77adc3c92124e24dbc0b8c0edbb2bb3c"
Dec 12 17:29:47.441780 containerd[1523]: time="2025-12-12T17:29:47.441707582Z" level=info msg="CreateContainer within sandbox \"dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 12 17:29:47.460208 containerd[1523]: time="2025-12-12T17:29:47.458016805Z" level=info msg="Container 07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:47.460790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622291836.mount: Deactivated successfully.
Dec 12 17:29:47.475182 containerd[1523]: time="2025-12-12T17:29:47.475132119Z" level=info msg="CreateContainer within sandbox \"dc808d3763e2155da5f879567bdba829907188db4e1d59420975b9f27629992f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c\""
Dec 12 17:29:47.476011 containerd[1523]: time="2025-12-12T17:29:47.475976611Z" level=info msg="StartContainer for \"07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c\""
Dec 12 17:29:47.477708 containerd[1523]: time="2025-12-12T17:29:47.477659434Z" level=info msg="connecting to shim 07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c" address="unix:///run/containerd/s/b1941e3a470eccc4cb21c70e8a26689aa4a6a32904519debad96e37be2fb139f" protocol=ttrpc version=3
Dec 12 17:29:47.500089 systemd[1]: Started cri-containerd-07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c.scope - libcontainer container 07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c.
Dec 12 17:29:47.547096 containerd[1523]: time="2025-12-12T17:29:47.547045904Z" level=info msg="StartContainer for \"07c0a34e1e34ad4f58812e005fe84034c5bdafc097f3175f9d172fce4ce55c2c\" returns successfully"
Dec 12 17:29:52.494618 containerd[1523]: time="2025-12-12T17:29:52.494317161Z" level=info msg="StopPodSandbox for \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\""
Dec 12 17:29:52.495722 containerd[1523]: time="2025-12-12T17:29:52.495531700Z" level=info msg="TearDown network for sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" successfully"
Dec 12 17:29:52.495722 containerd[1523]: time="2025-12-12T17:29:52.495579180Z" level=info msg="StopPodSandbox for \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" returns successfully"
Dec 12 17:29:52.496867 containerd[1523]: time="2025-12-12T17:29:52.496336352Z" level=info msg="RemovePodSandbox for \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\""
Dec 12 17:29:52.496867 containerd[1523]: time="2025-12-12T17:29:52.496375073Z" level=info msg="Forcibly stopping sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\""
Dec 12 17:29:52.496867 containerd[1523]: time="2025-12-12T17:29:52.496488954Z" level=info msg="TearDown network for sandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" successfully"
Dec 12 17:29:52.497915 containerd[1523]: time="2025-12-12T17:29:52.497878336Z" level=info msg="Ensure that sandbox 057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa in task-service has been cleanup successfully"
Dec 12 17:29:52.503711 containerd[1523]: time="2025-12-12T17:29:52.503659624Z" level=info msg="RemovePodSandbox \"057b8baef41e07f4c2bcdc0a254d325154bc4c8036aa92d74dc22e5b68d609aa\" returns successfully"
Dec 12 17:29:52.505444 containerd[1523]: time="2025-12-12T17:29:52.505384251Z" level=info msg="StopPodSandbox for \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\""
Dec 12 17:29:52.505606 containerd[1523]: time="2025-12-12T17:29:52.505540493Z" level=info msg="TearDown network for sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" successfully"
Dec 12 17:29:52.505606 containerd[1523]: time="2025-12-12T17:29:52.505554094Z" level=info msg="StopPodSandbox for \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" returns successfully"
Dec 12 17:29:52.506866 containerd[1523]: time="2025-12-12T17:29:52.505981940Z" level=info msg="RemovePodSandbox for \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\""
Dec 12 17:29:52.506866 containerd[1523]: time="2025-12-12T17:29:52.506016421Z" level=info msg="Forcibly stopping sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\""
Dec 12 17:29:52.506866 containerd[1523]: time="2025-12-12T17:29:52.506117462Z" level=info msg="TearDown network for sandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" successfully"
Dec 12 17:29:52.507683 containerd[1523]: time="2025-12-12T17:29:52.507643326Z" level=info msg="Ensure that sandbox 6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b in task-service has been cleanup successfully"
Dec 12 17:29:52.513503 containerd[1523]: time="2025-12-12T17:29:52.513196891Z" level=info msg="RemovePodSandbox \"6d8b6c56bdcd0c00323ba53d12fc0b5ec3e0212c2d61f31c456f5c5a5b3b638b\" returns successfully"
Dec 12 17:29:52.791467 kubelet[2742]: E1212 17:29:52.790419 2742 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36294->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-2-4-c728b0285d.188087feca3c423d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-2-4-c728b0285d,UID:bb5e9e80b9a6994b084d505e0e43dac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-4-c728b0285d,},FirstTimestamp:2025-12-12 17:29:42.313624125 +0000 UTC m=+230.012054472,LastTimestamp:2025-12-12 17:29:42.313624125 +0000 UTC m=+230.012054472,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-4-c728b0285d,}"
Dec 12 17:29:53.295479 systemd[1]: cri-containerd-df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec.scope: Deactivated successfully.
Dec 12 17:29:53.296746 systemd[1]: cri-containerd-df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec.scope: Consumed 3.922s CPU time, 22.4M memory peak.
Dec 12 17:29:53.299414 containerd[1523]: time="2025-12-12T17:29:53.299092728Z" level=info msg="received container exit event container_id:\"df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec\" id:\"df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec\" pid:2588 exit_status:1 exited_at:{seconds:1765560593 nanos:298505879}"
Dec 12 17:29:53.328372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec-rootfs.mount: Deactivated successfully.
Dec 12 17:29:53.471434 kubelet[2742]: I1212 17:29:53.471112 2742 scope.go:117] "RemoveContainer" containerID="df30395836fdfee4f2eb3569a111eea46c511edfae765140064758fd2c9decec"
Dec 12 17:29:53.473948 containerd[1523]: time="2025-12-12T17:29:53.473804185Z" level=info msg="CreateContainer within sandbox \"f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 12 17:29:53.497917 containerd[1523]: time="2025-12-12T17:29:53.495749888Z" level=info msg="Container d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:53.499979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753520894.mount: Deactivated successfully.
Dec 12 17:29:53.510846 containerd[1523]: time="2025-12-12T17:29:53.510748363Z" level=info msg="CreateContainer within sandbox \"f96b63a8aac4f857e075293729cb708cadea200dd807dc8d0037cfe83e0b454c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c\""
Dec 12 17:29:53.511847 containerd[1523]: time="2025-12-12T17:29:53.511751939Z" level=info msg="StartContainer for \"d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c\""
Dec 12 17:29:53.513716 containerd[1523]: time="2025-12-12T17:29:53.513652889Z" level=info msg="connecting to shim d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c" address="unix:///run/containerd/s/2800b4fb1448b0b1939f35726636c1ea4343422b819aa099796531cb6d91dfcc" protocol=ttrpc version=3
Dec 12 17:29:53.538626 systemd[1]: Started cri-containerd-d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c.scope - libcontainer container d28aa89cbb2785b99bbf989f25265d4895d7dfad731db527fc3a753fe0dfba6c.