Mar 12 23:44:14.809580 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 12 23:44:14.809605 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 12 22:07:21 -00 2026
Mar 12 23:44:14.809616 kernel: KASLR enabled
Mar 12 23:44:14.809622 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 12 23:44:14.809627 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Mar 12 23:44:14.809633 kernel: random: crng init done
Mar 12 23:44:14.809640 kernel: secureboot: Secure boot disabled
Mar 12 23:44:14.809645 kernel: ACPI: Early table checksum verification disabled
Mar 12 23:44:14.809651 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 12 23:44:14.809657 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 12 23:44:14.809664 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809670 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809676 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809682 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809689 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809696 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809702 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809708 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809714 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 23:44:14.809720 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 12 23:44:14.809726 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 12 23:44:14.811554 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 23:44:14.811562 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 12 23:44:14.811568 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Mar 12 23:44:14.811574 kernel: Zone ranges:
Mar 12 23:44:14.811581 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 12 23:44:14.811591 kernel: DMA32 empty
Mar 12 23:44:14.811598 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 12 23:44:14.811604 kernel: Device empty
Mar 12 23:44:14.811610 kernel: Movable zone start for each node
Mar 12 23:44:14.811616 kernel: Early memory node ranges
Mar 12 23:44:14.811622 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Mar 12 23:44:14.811628 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Mar 12 23:44:14.811634 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Mar 12 23:44:14.811640 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 12 23:44:14.811646 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 12 23:44:14.811652 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 12 23:44:14.811658 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 12 23:44:14.811666 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 12 23:44:14.811672 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 12 23:44:14.811681 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 12 23:44:14.811687 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 12 23:44:14.811694 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Mar 12 23:44:14.811702 kernel: psci: probing for conduit method from ACPI.
Mar 12 23:44:14.811708 kernel: psci: PSCIv1.1 detected in firmware.
Mar 12 23:44:14.811715 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 23:44:14.811721 kernel: psci: Trusted OS migration not required
Mar 12 23:44:14.811738 kernel: psci: SMC Calling Convention v1.1
Mar 12 23:44:14.811746 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 12 23:44:14.811765 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 23:44:14.811772 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 23:44:14.811779 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 23:44:14.811785 kernel: Detected PIPT I-cache on CPU0
Mar 12 23:44:14.811792 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 23:44:14.811801 kernel: CPU features: detected: Spectre-v4
Mar 12 23:44:14.811807 kernel: CPU features: detected: Spectre-BHB
Mar 12 23:44:14.811814 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 12 23:44:14.811820 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 12 23:44:14.811827 kernel: CPU features: detected: ARM erratum 1418040
Mar 12 23:44:14.811833 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 12 23:44:14.811840 kernel: alternatives: applying boot alternatives
Mar 12 23:44:14.811848 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 12 23:44:14.811855 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 23:44:14.811861 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 23:44:14.811868 kernel: Fallback order for Node 0: 0
Mar 12 23:44:14.811876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Mar 12 23:44:14.811883 kernel: Policy zone: Normal
Mar 12 23:44:14.811889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 23:44:14.811896 kernel: software IO TLB: area num 2.
Mar 12 23:44:14.811902 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Mar 12 23:44:14.811914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 23:44:14.811921 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 23:44:14.811928 kernel: rcu: RCU event tracing is enabled.
Mar 12 23:44:14.811935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 23:44:14.811941 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 23:44:14.811948 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 23:44:14.811954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 23:44:14.811963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 23:44:14.811969 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:44:14.811976 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 23:44:14.811982 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 23:44:14.811989 kernel: GICv3: 256 SPIs implemented
Mar 12 23:44:14.811995 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 23:44:14.812001 kernel: Root IRQ handler: gic_handle_irq
Mar 12 23:44:14.812008 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 12 23:44:14.812014 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 12 23:44:14.812021 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 12 23:44:14.812027 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 12 23:44:14.812035 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Mar 12 23:44:14.812042 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Mar 12 23:44:14.812049 kernel: GICv3: using LPI property table @0x0000000100120000
Mar 12 23:44:14.812055 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Mar 12 23:44:14.812062 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 23:44:14.812068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 12 23:44:14.812075 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 12 23:44:14.812082 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 12 23:44:14.812088 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 12 23:44:14.812095 kernel: Console: colour dummy device 80x25
Mar 12 23:44:14.812102 kernel: ACPI: Core revision 20240827
Mar 12 23:44:14.812110 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 12 23:44:14.812117 kernel: pid_max: default: 32768 minimum: 301
Mar 12 23:44:14.812123 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 23:44:14.812130 kernel: landlock: Up and running.
Mar 12 23:44:14.812137 kernel: SELinux: Initializing.
Mar 12 23:44:14.812143 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:44:14.812150 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 23:44:14.812157 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 23:44:14.812164 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 23:44:14.812172 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 23:44:14.812179 kernel: Remapping and enabling EFI services.
Mar 12 23:44:14.812185 kernel: smp: Bringing up secondary CPUs ...
Mar 12 23:44:14.812192 kernel: Detected PIPT I-cache on CPU1
Mar 12 23:44:14.812199 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 12 23:44:14.812205 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Mar 12 23:44:14.812212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 12 23:44:14.812218 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 12 23:44:14.812225 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 23:44:14.812233 kernel: SMP: Total of 2 processors activated.
Mar 12 23:44:14.812244 kernel: CPU: All CPU(s) started at EL1
Mar 12 23:44:14.812251 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 23:44:14.812260 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 12 23:44:14.812267 kernel: CPU features: detected: Common not Private translations
Mar 12 23:44:14.812274 kernel: CPU features: detected: CRC32 instructions
Mar 12 23:44:14.812281 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 12 23:44:14.812288 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 12 23:44:14.812297 kernel: CPU features: detected: LSE atomic instructions
Mar 12 23:44:14.812312 kernel: CPU features: detected: Privileged Access Never
Mar 12 23:44:14.812320 kernel: CPU features: detected: RAS Extension Support
Mar 12 23:44:14.812327 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 12 23:44:14.812335 kernel: alternatives: applying system-wide alternatives
Mar 12 23:44:14.812342 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 12 23:44:14.812349 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Mar 12 23:44:14.812356 kernel: devtmpfs: initialized
Mar 12 23:44:14.812364 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 23:44:14.812373 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 23:44:14.812380 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 12 23:44:14.812387 kernel: 0 pages in range for non-PLT usage
Mar 12 23:44:14.812394 kernel: 508400 pages in range for PLT usage
Mar 12 23:44:14.812401 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 23:44:14.812408 kernel: SMBIOS 3.0.0 present.
Mar 12 23:44:14.812415 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 12 23:44:14.812422 kernel: DMI: Memory slots populated: 1/1
Mar 12 23:44:14.812429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 23:44:14.812437 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 23:44:14.812445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 23:44:14.812452 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 23:44:14.812459 kernel: audit: initializing netlink subsys (disabled)
Mar 12 23:44:14.812466 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Mar 12 23:44:14.812473 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 23:44:14.812480 kernel: cpuidle: using governor menu
Mar 12 23:44:14.812487 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 23:44:14.812494 kernel: ASID allocator initialised with 32768 entries
Mar 12 23:44:14.812503 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 23:44:14.812510 kernel: Serial: AMBA PL011 UART driver
Mar 12 23:44:14.812517 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 23:44:14.812524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 23:44:14.812531 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 23:44:14.812538 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 23:44:14.812545 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 23:44:14.812552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 23:44:14.812559 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 23:44:14.812568 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 23:44:14.812575 kernel: ACPI: Added _OSI(Module Device)
Mar 12 23:44:14.812582 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 23:44:14.812589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 23:44:14.812596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 23:44:14.812603 kernel: ACPI: Interpreter enabled
Mar 12 23:44:14.812610 kernel: ACPI: Using GIC for interrupt routing
Mar 12 23:44:14.812618 kernel: ACPI: MCFG table detected, 1 entries
Mar 12 23:44:14.812625 kernel: ACPI: CPU0 has been hot-added
Mar 12 23:44:14.812633 kernel: ACPI: CPU1 has been hot-added
Mar 12 23:44:14.812641 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 12 23:44:14.812648 kernel: printk: legacy console [ttyAMA0] enabled
Mar 12 23:44:14.812655 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 23:44:14.813862 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 23:44:14.813942 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 23:44:14.814002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 23:44:14.814065 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 12 23:44:14.814138 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 12 23:44:14.814149 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 12 23:44:14.814157 kernel: PCI host bridge to bus 0000:00
Mar 12 23:44:14.814228 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 12 23:44:14.814283 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 12 23:44:14.814352 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 12 23:44:14.814407 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 23:44:14.814487 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Mar 12 23:44:14.814559 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Mar 12 23:44:14.814619 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Mar 12 23:44:14.814678 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 12 23:44:14.816139 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.816223 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Mar 12 23:44:14.816295 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 12 23:44:14.816399 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Mar 12 23:44:14.816462 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Mar 12 23:44:14.816529 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.816588 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Mar 12 23:44:14.816648 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 12 23:44:14.816714 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Mar 12 23:44:14.816830 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.816899 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Mar 12 23:44:14.816959 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 12 23:44:14.817017 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Mar 12 23:44:14.817075 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Mar 12 23:44:14.817142 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.817201 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Mar 12 23:44:14.817264 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 12 23:44:14.817333 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Mar 12 23:44:14.817393 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Mar 12 23:44:14.817463 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.817523 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Mar 12 23:44:14.817582 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 12 23:44:14.817640 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 12 23:44:14.817702 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Mar 12 23:44:14.819870 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.819950 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Mar 12 23:44:14.820012 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 12 23:44:14.820071 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Mar 12 23:44:14.820129 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Mar 12 23:44:14.820197 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.820262 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Mar 12 23:44:14.820342 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 12 23:44:14.820405 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Mar 12 23:44:14.820464 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Mar 12 23:44:14.820529 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.820588 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Mar 12 23:44:14.820649 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 12 23:44:14.820709 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Mar 12 23:44:14.820791 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 12 23:44:14.820852 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Mar 12 23:44:14.820912 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 12 23:44:14.820970 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Mar 12 23:44:14.821039 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Mar 12 23:44:14.821103 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Mar 12 23:44:14.821175 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Mar 12 23:44:14.821238 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Mar 12 23:44:14.821300 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 12 23:44:14.821400 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Mar 12 23:44:14.821472 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Mar 12 23:44:14.821533 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Mar 12 23:44:14.821607 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Mar 12 23:44:14.821667 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Mar 12 23:44:14.823776 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 12 23:44:14.823916 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Mar 12 23:44:14.823984 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 12 23:44:14.824056 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Mar 12 23:44:14.824128 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Mar 12 23:44:14.824190 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 12 23:44:14.824258 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Mar 12 23:44:14.824355 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Mar 12 23:44:14.824424 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 12 23:44:14.824497 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Mar 12 23:44:14.824560 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Mar 12 23:44:14.824625 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 12 23:44:14.824685 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Mar 12 23:44:14.826826 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 12 23:44:14.826909 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 12 23:44:14.826972 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 12 23:44:14.827037 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 12 23:44:14.827099 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 12 23:44:14.827171 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 12 23:44:14.827237 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 12 23:44:14.827295 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 12 23:44:14.827397 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 12 23:44:14.827467 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 12 23:44:14.827526 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 12 23:44:14.827589 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 12 23:44:14.827653 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 12 23:44:14.827712 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 12 23:44:14.827822 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 12 23:44:14.827890 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 12 23:44:14.827949 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 12 23:44:14.828008 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 12 23:44:14.828074 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 12 23:44:14.828133 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 12 23:44:14.828192 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 12 23:44:14.828255 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 12 23:44:14.828330 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 12 23:44:14.828396 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 12 23:44:14.828460 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 12 23:44:14.828527 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 12 23:44:14.828588 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 12 23:44:14.828649 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Mar 12 23:44:14.828710 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Mar 12 23:44:14.830849 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Mar 12 23:44:14.830928 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Mar 12 23:44:14.830993 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Mar 12 23:44:14.831061 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Mar 12 23:44:14.831125 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Mar 12 23:44:14.831183 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Mar 12 23:44:14.831244 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Mar 12 23:44:14.831302 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Mar 12 23:44:14.831403 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Mar 12 23:44:14.831464 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Mar 12 23:44:14.831526 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Mar 12 23:44:14.831590 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Mar 12 23:44:14.831650 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Mar 12 23:44:14.831709 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Mar 12 23:44:14.831796 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Mar 12 23:44:14.831858 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Mar 12 23:44:14.831923 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Mar 12 23:44:14.831981 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Mar 12 23:44:14.832040 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Mar 12 23:44:14.832102 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Mar 12 23:44:14.832169 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Mar 12 23:44:14.832230 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Mar 12 23:44:14.832288 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Mar 12 23:44:14.832363 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Mar 12 23:44:14.832424 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Mar 12 23:44:14.832482 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Mar 12 23:44:14.832540 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Mar 12 23:44:14.832597 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Mar 12 23:44:14.832657 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Mar 12 23:44:14.832714 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Mar 12 23:44:14.834840 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Mar 12 23:44:14.834921 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Mar 12 23:44:14.834983 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Mar 12 23:44:14.835043 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Mar 12 23:44:14.835105 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Mar 12 23:44:14.835163 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Mar 12 23:44:14.835227 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Mar 12 23:44:14.835298 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Mar 12 23:44:14.835382 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Mar 12 23:44:14.835449 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Mar 12 23:44:14.835510 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 12 23:44:14.835569 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 12 23:44:14.835628 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 12 23:44:14.835686 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 12 23:44:14.835773 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Mar 12 23:44:14.835839 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 12 23:44:14.835903 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 12 23:44:14.835962 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 12 23:44:14.836021 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 12 23:44:14.836089 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Mar 12 23:44:14.836151 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Mar 12 23:44:14.836211 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 12 23:44:14.836271 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 12 23:44:14.836349 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 12 23:44:14.836412 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 12 23:44:14.836481 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Mar 12 23:44:14.836543 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 12 23:44:14.836602 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 12 23:44:14.836672 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 12 23:44:14.838805 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 12 23:44:14.838942 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Mar 12 23:44:14.839008 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Mar 12 23:44:14.839072 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 12 23:44:14.839132 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 12 23:44:14.839191 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 12 23:44:14.839249 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 12 23:44:14.839332 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Mar 12 23:44:14.839405 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Mar 12 23:44:14.839472 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 12 23:44:14.839553 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 12 23:44:14.839616 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 12 23:44:14.839675 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 12 23:44:14.839759 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Mar 12 23:44:14.839826 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Mar 12 23:44:14.839893 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Mar 12 23:44:14.839955 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 12 23:44:14.840016 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 12 23:44:14.840074 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 12 23:44:14.840135 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 12 23:44:14.840198 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 12 23:44:14.840256 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 12 23:44:14.840350 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 12 23:44:14.840415 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 12 23:44:14.840479 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 12 23:44:14.840537 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 12
23:44:14.840599 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 12 23:44:14.840657 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 12 23:44:14.840720 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 12 23:44:14.842444 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 12 23:44:14.842510 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 12 23:44:14.842580 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 12 23:44:14.842638 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 12 23:44:14.842699 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 12 23:44:14.842799 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 12 23:44:14.842860 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 12 23:44:14.842915 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 12 23:44:14.842979 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 12 23:44:14.843034 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 12 23:44:14.843092 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 12 23:44:14.843154 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 12 23:44:14.843209 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 12 23:44:14.843263 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 12 23:44:14.843340 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 12 23:44:14.843399 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 12 23:44:14.843453 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 12 23:44:14.843524 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 12 23:44:14.843583 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 12 23:44:14.843637 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 12 23:44:14.843698 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 12 23:44:14.845442 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 12 23:44:14.845520 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 12 23:44:14.845586 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 12 23:44:14.845649 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 12 23:44:14.845702 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 12 23:44:14.845869 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 12 23:44:14.845933 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 12 23:44:14.845988 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 12 23:44:14.845997 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 12 23:44:14.846005 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 12 23:44:14.846017 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 12 23:44:14.846025 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 12 23:44:14.846032 kernel: iommu: Default domain type: Translated Mar 12 23:44:14.846040 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 12 23:44:14.846047 kernel: efivars: Registered efivars operations Mar 12 23:44:14.846055 kernel: vgaarb: loaded Mar 12 23:44:14.846062 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 12 23:44:14.846069 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 23:44:14.846077 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 23:44:14.846086 kernel: pnp: PnP ACPI init Mar 12 23:44:14.846161 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 12 23:44:14.846173 kernel: pnp: PnP ACPI: found 1 devices Mar 12 23:44:14.846180 kernel: NET: Registered PF_INET 
protocol family Mar 12 23:44:14.846188 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 23:44:14.846195 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 23:44:14.846203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 23:44:14.846210 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 12 23:44:14.846220 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 23:44:14.846228 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 23:44:14.846235 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 23:44:14.846243 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 23:44:14.846250 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 23:44:14.846337 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 12 23:44:14.846350 kernel: PCI: CLS 0 bytes, default 64 Mar 12 23:44:14.846358 kernel: kvm [1]: HYP mode not available Mar 12 23:44:14.846365 kernel: Initialise system trusted keyrings Mar 12 23:44:14.846376 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 23:44:14.846383 kernel: Key type asymmetric registered Mar 12 23:44:14.846391 kernel: Asymmetric key parser 'x509' registered Mar 12 23:44:14.846398 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 12 23:44:14.846405 kernel: io scheduler mq-deadline registered Mar 12 23:44:14.846413 kernel: io scheduler kyber registered Mar 12 23:44:14.846420 kernel: io scheduler bfq registered Mar 12 23:44:14.846429 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 12 23:44:14.846499 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 12 23:44:14.846564 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 12 23:44:14.846624 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.846685 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 12 23:44:14.846768 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 12 23:44:14.846834 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.846899 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 12 23:44:14.846960 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 12 23:44:14.847020 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.847086 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 12 23:44:14.847145 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 12 23:44:14.847204 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.847267 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 12 23:44:14.847344 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 12 23:44:14.847406 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.847469 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 12 23:44:14.847532 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 12 23:44:14.847590 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.847653 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 12 23:44:14.847715 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 12 23:44:14.847817 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.847883 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 12 23:44:14.847943 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 12 23:44:14.848002 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.848017 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 12 23:44:14.848078 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 12 23:44:14.848138 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 12 23:44:14.848196 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 23:44:14.848206 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 12 23:44:14.848214 kernel: ACPI: button: Power Button [PWRB] Mar 12 23:44:14.848221 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 12 23:44:14.848286 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 12 23:44:14.848405 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 12 23:44:14.848419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 23:44:14.848427 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 12 23:44:14.848493 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 12 23:44:14.848503 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 12 23:44:14.848511 kernel: thunder_xcv, ver 1.0 Mar 12 23:44:14.848519 kernel: thunder_bgx, ver 1.0 Mar 12 23:44:14.848526 kernel: nicpf, ver 1.0 Mar 12 23:44:14.848534 kernel: nicvf, ver 1.0 Mar 12 23:44:14.848612 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 12 23:44:14.848674 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T23:44:14 UTC (1773359054) Mar 12 23:44:14.848684 kernel: hid: raw HID events 
driver (C) Jiri Kosina Mar 12 23:44:14.848691 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Mar 12 23:44:14.848699 kernel: watchdog: NMI not fully supported Mar 12 23:44:14.848707 kernel: watchdog: Hard watchdog permanently disabled Mar 12 23:44:14.848714 kernel: NET: Registered PF_INET6 protocol family Mar 12 23:44:14.848722 kernel: Segment Routing with IPv6 Mar 12 23:44:14.848752 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 23:44:14.848760 kernel: NET: Registered PF_PACKET protocol family Mar 12 23:44:14.848768 kernel: Key type dns_resolver registered Mar 12 23:44:14.848775 kernel: registered taskstats version 1 Mar 12 23:44:14.848783 kernel: Loading compiled-in X.509 certificates Mar 12 23:44:14.848790 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 653709f5ad64856a37b70c07139630123477ee1c' Mar 12 23:44:14.848798 kernel: Demotion targets for Node 0: null Mar 12 23:44:14.848805 kernel: Key type .fscrypt registered Mar 12 23:44:14.848812 kernel: Key type fscrypt-provisioning registered Mar 12 23:44:14.848822 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 12 23:44:14.848829 kernel: ima: Allocated hash algorithm: sha1 Mar 12 23:44:14.848836 kernel: ima: No architecture policies found Mar 12 23:44:14.848844 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 12 23:44:14.848851 kernel: clk: Disabling unused clocks Mar 12 23:44:14.848859 kernel: PM: genpd: Disabling unused power domains Mar 12 23:44:14.848866 kernel: Warning: unable to open an initial console. Mar 12 23:44:14.848874 kernel: Freeing unused kernel memory: 39552K Mar 12 23:44:14.848882 kernel: Run /init as init process Mar 12 23:44:14.848889 kernel: with arguments: Mar 12 23:44:14.848898 kernel: /init Mar 12 23:44:14.848905 kernel: with environment: Mar 12 23:44:14.848913 kernel: HOME=/ Mar 12 23:44:14.848920 kernel: TERM=linux Mar 12 23:44:14.848928 systemd[1]: Successfully made /usr/ read-only. 
Mar 12 23:44:14.848940 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 12 23:44:14.848948 systemd[1]: Detected virtualization kvm. Mar 12 23:44:14.848957 systemd[1]: Detected architecture arm64. Mar 12 23:44:14.848965 systemd[1]: Running in initrd. Mar 12 23:44:14.848972 systemd[1]: No hostname configured, using default hostname. Mar 12 23:44:14.848980 systemd[1]: Hostname set to . Mar 12 23:44:14.848988 systemd[1]: Initializing machine ID from VM UUID. Mar 12 23:44:14.848995 systemd[1]: Queued start job for default target initrd.target. Mar 12 23:44:14.849003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 23:44:14.849011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 23:44:14.849021 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 12 23:44:14.849029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 23:44:14.849038 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 23:44:14.849048 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 23:44:14.849056 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 23:44:14.849065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 23:44:14.849072 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 12 23:44:14.849082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 23:44:14.849089 systemd[1]: Reached target paths.target - Path Units. Mar 12 23:44:14.849097 systemd[1]: Reached target slices.target - Slice Units. Mar 12 23:44:14.849105 systemd[1]: Reached target swap.target - Swaps. Mar 12 23:44:14.849113 systemd[1]: Reached target timers.target - Timer Units. Mar 12 23:44:14.849120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 23:44:14.849128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 23:44:14.849136 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 23:44:14.849144 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 12 23:44:14.849154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 23:44:14.849162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 23:44:14.849170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 23:44:14.849177 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 23:44:14.849185 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 23:44:14.849193 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 23:44:14.849201 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 23:44:14.849209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 12 23:44:14.849219 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 23:44:14.849227 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 23:44:14.849235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 12 23:44:14.849243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:44:14.849250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 23:44:14.849259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 23:44:14.849268 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 23:44:14.849276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 23:44:14.849322 systemd-journald[245]: Collecting audit messages is disabled. Mar 12 23:44:14.849347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:44:14.849356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 23:44:14.849364 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 23:44:14.849372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 23:44:14.849380 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 23:44:14.849388 kernel: Bridge firewalling registered Mar 12 23:44:14.849396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 23:44:14.849406 systemd-journald[245]: Journal started Mar 12 23:44:14.849425 systemd-journald[245]: Runtime Journal (/run/log/journal/a0c01f1e1a994d39aed685be440699ba) is 8M, max 76.5M, 68.5M free. Mar 12 23:44:14.817805 systemd-modules-load[246]: Inserted module 'overlay' Mar 12 23:44:14.853292 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 23:44:14.844346 systemd-modules-load[246]: Inserted module 'br_netfilter' Mar 12 23:44:14.854592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 12 23:44:14.860218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 23:44:14.867949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 23:44:14.871426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 23:44:14.877147 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 12 23:44:14.880015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 23:44:14.882799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 23:44:14.885194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 23:44:14.888274 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 23:44:14.911884 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d Mar 12 23:44:14.932672 systemd-resolved[286]: Positive Trust Anchors: Mar 12 23:44:14.933313 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 23:44:14.933348 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 23:44:14.942431 systemd-resolved[286]: Defaulting to hostname 'linux'. Mar 12 23:44:14.943681 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 23:44:14.944428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 23:44:15.009814 kernel: SCSI subsystem initialized Mar 12 23:44:15.013794 kernel: Loading iSCSI transport class v2.0-870. Mar 12 23:44:15.022142 kernel: iscsi: registered transport (tcp) Mar 12 23:44:15.035016 kernel: iscsi: registered transport (qla4xxx) Mar 12 23:44:15.035123 kernel: QLogic iSCSI HBA Driver Mar 12 23:44:15.058709 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 23:44:15.080069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 23:44:15.085365 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 23:44:15.140795 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 23:44:15.142551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 12 23:44:15.207809 kernel: raid6: neonx8 gen() 15534 MB/s Mar 12 23:44:15.224984 kernel: raid6: neonx4 gen() 13901 MB/s Mar 12 23:44:15.241814 kernel: raid6: neonx2 gen() 13158 MB/s Mar 12 23:44:15.258793 kernel: raid6: neonx1 gen() 10395 MB/s Mar 12 23:44:15.275788 kernel: raid6: int64x8 gen() 6867 MB/s Mar 12 23:44:15.292823 kernel: raid6: int64x4 gen() 7318 MB/s Mar 12 23:44:15.309774 kernel: raid6: int64x2 gen() 6070 MB/s Mar 12 23:44:15.326786 kernel: raid6: int64x1 gen() 5022 MB/s Mar 12 23:44:15.326832 kernel: raid6: using algorithm neonx8 gen() 15534 MB/s Mar 12 23:44:15.343817 kernel: raid6: .... xor() 11954 MB/s, rmw enabled Mar 12 23:44:15.343953 kernel: raid6: using neon recovery algorithm Mar 12 23:44:15.348946 kernel: xor: measuring software checksum speed Mar 12 23:44:15.349010 kernel: 8regs : 21618 MB/sec Mar 12 23:44:15.349022 kernel: 32regs : 21710 MB/sec Mar 12 23:44:15.349033 kernel: arm64_neon : 28147 MB/sec Mar 12 23:44:15.349754 kernel: xor: using function: arm64_neon (28147 MB/sec) Mar 12 23:44:15.403777 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 23:44:15.411812 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 23:44:15.415573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 23:44:15.444669 systemd-udevd[494]: Using default interface naming scheme 'v255'. Mar 12 23:44:15.449218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 23:44:15.457147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 23:44:15.485782 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Mar 12 23:44:15.517860 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 23:44:15.521265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 23:44:15.583452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 12 23:44:15.587397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 23:44:15.689783 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Mar 12 23:44:15.691509 kernel: scsi host0: Virtio SCSI HBA Mar 12 23:44:15.696979 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 12 23:44:15.697009 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 12 23:44:15.706152 kernel: ACPI: bus type USB registered Mar 12 23:44:15.706206 kernel: usbcore: registered new interface driver usbfs Mar 12 23:44:15.710445 kernel: usbcore: registered new interface driver hub Mar 12 23:44:15.713765 kernel: usbcore: registered new device driver usb Mar 12 23:44:15.721689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 23:44:15.721803 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:44:15.724339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:44:15.726102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:44:15.739009 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 12 23:44:15.740145 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 12 23:44:15.740318 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 12 23:44:15.741331 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 12 23:44:15.741488 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 12 23:44:15.753061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 23:44:15.753121 kernel: GPT:17805311 != 80003071 Mar 12 23:44:15.753131 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 23:44:15.753140 kernel: GPT:17805311 != 80003071 Mar 12 23:44:15.754062 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 12 23:44:15.754092 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 23:44:15.755811 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 12 23:44:15.762766 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 12 23:44:15.765306 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 12 23:44:15.765519 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 23:44:15.766755 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 12 23:44:15.766912 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 12 23:44:15.769123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:44:15.772835 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 12 23:44:15.773012 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 12 23:44:15.773090 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 12 23:44:15.773170 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 12 23:44:15.773839 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 12 23:44:15.774880 kernel: hub 1-0:1.0: USB hub found Mar 12 23:44:15.775028 kernel: hub 1-0:1.0: 4 ports detected Mar 12 23:44:15.777329 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 12 23:44:15.779768 kernel: hub 2-0:1.0: USB hub found Mar 12 23:44:15.779989 kernel: hub 2-0:1.0: 4 ports detected Mar 12 23:44:15.824752 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 12 23:44:15.843158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 12 23:44:15.852511 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 12 23:44:15.853376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Mar 12 23:44:15.868416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 12 23:44:15.871095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 23:44:15.888804 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 23:44:15.889617 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 23:44:15.894201 disk-uuid[602]: Primary Header is updated. Mar 12 23:44:15.894201 disk-uuid[602]: Secondary Entries is updated. Mar 12 23:44:15.894201 disk-uuid[602]: Secondary Header is updated. Mar 12 23:44:15.891627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 23:44:15.892540 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 23:44:15.894304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 23:44:15.902809 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 23:44:15.916856 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 23:44:15.925408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 12 23:44:16.013765 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 12 23:44:16.144986 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 12 23:44:16.145048 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 12 23:44:16.145224 kernel: usbcore: registered new interface driver usbhid Mar 12 23:44:16.145787 kernel: usbhid: USB HID core driver Mar 12 23:44:16.250808 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 12 23:44:16.378790 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 12 23:44:16.432034 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 12 23:44:16.929891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 23:44:16.930780 disk-uuid[605]: The operation has completed successfully. Mar 12 23:44:16.984421 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 23:44:16.985775 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 23:44:17.009562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 23:44:17.027887 sh[629]: Success Mar 12 23:44:17.047573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 23:44:17.047632 kernel: device-mapper: uevent: version 1.0.3 Mar 12 23:44:17.047651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 12 23:44:17.056808 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 12 23:44:17.102836 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 12 23:44:17.106323 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 23:44:17.120301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 23:44:17.130776 kernel: BTRFS: device fsid fcbb17b2-5053-44fc-82f0-b24e4919d6d8 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (641)
Mar 12 23:44:17.135786 kernel: BTRFS info (device dm-0): first mount of filesystem fcbb17b2-5053-44fc-82f0-b24e4919d6d8
Mar 12 23:44:17.135859 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 12 23:44:17.144771 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 12 23:44:17.144834 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 12 23:44:17.145743 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 12 23:44:17.147593 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 23:44:17.149665 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 23:44:17.152009 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 23:44:17.153189 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 23:44:17.156990 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 23:44:17.180848 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (668)
Mar 12 23:44:17.182985 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b
Mar 12 23:44:17.183037 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 23:44:17.190120 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 12 23:44:17.190182 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 23:44:17.190196 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 23:44:17.196867 kernel: BTRFS info (device sda6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b
Mar 12 23:44:17.198450 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 23:44:17.201928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 23:44:17.317116 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 23:44:17.319189 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 23:44:17.350887 ignition[717]: Ignition 2.22.0
Mar 12 23:44:17.350903 ignition[717]: Stage: fetch-offline
Mar 12 23:44:17.353674 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 23:44:17.350940 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:17.350948 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:17.351032 ignition[717]: parsed url from cmdline: ""
Mar 12 23:44:17.351035 ignition[717]: no config URL provided
Mar 12 23:44:17.351039 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 23:44:17.351046 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Mar 12 23:44:17.351053 ignition[717]: failed to fetch config: resource requires networking
Mar 12 23:44:17.351213 ignition[717]: Ignition finished successfully
Mar 12 23:44:17.364528 systemd-networkd[819]: lo: Link UP
Mar 12 23:44:17.364542 systemd-networkd[819]: lo: Gained carrier
Mar 12 23:44:17.366416 systemd-networkd[819]: Enumeration completed
Mar 12 23:44:17.366947 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:44:17.366951 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 23:44:17.367332 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 23:44:17.367396 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:44:17.367399 systemd-networkd[819]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 23:44:17.367669 systemd-networkd[819]: eth0: Link UP
Mar 12 23:44:17.367777 systemd-networkd[819]: eth1: Link UP
Mar 12 23:44:17.368697 systemd[1]: Reached target network.target - Network.
Mar 12 23:44:17.369064 systemd-networkd[819]: eth0: Gained carrier
Mar 12 23:44:17.369075 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:44:17.373606 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 23:44:17.380218 systemd-networkd[819]: eth1: Gained carrier
Mar 12 23:44:17.380234 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 23:44:17.407680 ignition[823]: Ignition 2.22.0
Mar 12 23:44:17.407700 ignition[823]: Stage: fetch
Mar 12 23:44:17.407911 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:17.407922 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:17.408010 ignition[823]: parsed url from cmdline: ""
Mar 12 23:44:17.408013 ignition[823]: no config URL provided
Mar 12 23:44:17.408019 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 23:44:17.408030 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Mar 12 23:44:17.408062 ignition[823]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 12 23:44:17.408558 ignition[823]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 12 23:44:17.414829 systemd-networkd[819]: eth0: DHCPv4 address 159.69.55.216/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 12 23:44:17.420872 systemd-networkd[819]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 12 23:44:17.609624 ignition[823]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 12 23:44:17.616979 ignition[823]: GET result: OK
Mar 12 23:44:17.617307 ignition[823]: parsing config with SHA512: 10d6715cdeb696fafb225b6b06ce3c75214f11333bb3b719e2f1b1420660779b19f1bbb0c47237e7d1a3c2fd81bb513399c6b3513f3e0cfd7b2a73ed06082948
Mar 12 23:44:17.625892 unknown[823]: fetched base config from "system"
Mar 12 23:44:17.625920 unknown[823]: fetched base config from "system"
Mar 12 23:44:17.625939 unknown[823]: fetched user config from "hetzner"
Mar 12 23:44:17.627414 ignition[823]: fetch: fetch complete
Mar 12 23:44:17.627424 ignition[823]: fetch: fetch passed
Mar 12 23:44:17.627511 ignition[823]: Ignition finished successfully
Mar 12 23:44:17.630765 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 23:44:17.633482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 23:44:17.672490 ignition[830]: Ignition 2.22.0
Mar 12 23:44:17.672509 ignition[830]: Stage: kargs
Mar 12 23:44:17.673450 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:17.673466 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:17.674453 ignition[830]: kargs: kargs passed
Mar 12 23:44:17.674701 ignition[830]: Ignition finished successfully
Mar 12 23:44:17.679875 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 23:44:17.684944 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 23:44:17.721603 ignition[836]: Ignition 2.22.0
Mar 12 23:44:17.721622 ignition[836]: Stage: disks
Mar 12 23:44:17.721793 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:17.721803 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:17.722607 ignition[836]: disks: disks passed
Mar 12 23:44:17.722655 ignition[836]: Ignition finished successfully
Mar 12 23:44:17.724808 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 23:44:17.726086 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 23:44:17.729000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 23:44:17.730614 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 23:44:17.732356 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 23:44:17.733585 systemd[1]: Reached target basic.target - Basic System.
Mar 12 23:44:17.735605 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 23:44:17.766314 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 12 23:44:17.770891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 23:44:17.773645 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 23:44:17.858768 kernel: EXT4-fs (sda9): mounted filesystem 4b09db19-3beb-48c2-8dcb-3eec5602206c r/w with ordered data mode. Quota mode: none.
Mar 12 23:44:17.860179 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 23:44:17.862169 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 23:44:17.866084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 23:44:17.868347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 23:44:17.875929 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 12 23:44:17.880919 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 23:44:17.880962 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 23:44:17.883831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 23:44:17.892815 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853)
Mar 12 23:44:17.894179 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b
Mar 12 23:44:17.894219 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 23:44:17.895211 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 23:44:17.910977 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 12 23:44:17.911033 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 23:44:17.912820 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 23:44:17.917831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 23:44:17.947381 coreos-metadata[855]: Mar 12 23:44:17.947 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 12 23:44:17.949491 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 23:44:17.950224 coreos-metadata[855]: Mar 12 23:44:17.949 INFO Fetch successful
Mar 12 23:44:17.950224 coreos-metadata[855]: Mar 12 23:44:17.949 INFO wrote hostname ci-4459-2-4-n-d21c84289b to /sysroot/etc/hostname
Mar 12 23:44:17.953887 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 23:44:17.958759 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory
Mar 12 23:44:17.964559 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 23:44:17.968447 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 23:44:18.062201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 23:44:18.066276 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 23:44:18.068509 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 23:44:18.084747 kernel: BTRFS info (device sda6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b
Mar 12 23:44:18.103804 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 23:44:18.113610 ignition[969]: INFO : Ignition 2.22.0
Mar 12 23:44:18.113610 ignition[969]: INFO : Stage: mount
Mar 12 23:44:18.114677 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:18.114677 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:18.116506 ignition[969]: INFO : mount: mount passed
Mar 12 23:44:18.116506 ignition[969]: INFO : Ignition finished successfully
Mar 12 23:44:18.116899 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 23:44:18.120268 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 23:44:18.132697 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 23:44:18.147016 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 23:44:18.174771 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (982)
Mar 12 23:44:18.176756 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b
Mar 12 23:44:18.176805 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 23:44:18.181058 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 12 23:44:18.181120 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 23:44:18.181130 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 23:44:18.184487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 23:44:18.220621 ignition[999]: INFO : Ignition 2.22.0
Mar 12 23:44:18.222404 ignition[999]: INFO : Stage: files
Mar 12 23:44:18.222404 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:18.222404 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:18.222404 ignition[999]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 23:44:18.225007 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 23:44:18.225007 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 23:44:18.229722 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 23:44:18.231056 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 23:44:18.232699 unknown[999]: wrote ssh authorized keys file for user: core
Mar 12 23:44:18.233887 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 23:44:18.235862 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 23:44:18.236840 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 12 23:44:18.332655 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 23:44:18.435192 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 23:44:18.437816 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 23:44:18.437816 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 12 23:44:18.635031 systemd-networkd[819]: eth0: Gained IPv6LL
Mar 12 23:44:18.682883 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 23:44:18.766292 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 23:44:18.767605 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 23:44:18.776563 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 12 23:44:19.024355 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 23:44:19.211401 systemd-networkd[819]: eth1: Gained IPv6LL
Mar 12 23:44:19.222348 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 23:44:19.222348 ignition[999]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 12 23:44:19.225188 ignition[999]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 23:44:19.227851 ignition[999]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 23:44:19.227851 ignition[999]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 12 23:44:19.227851 ignition[999]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 23:44:19.232865 ignition[999]: INFO : files: files passed
Mar 12 23:44:19.232865 ignition[999]: INFO : Ignition finished successfully
Mar 12 23:44:19.233870 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 23:44:19.237668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 23:44:19.240880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 23:44:19.254637 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 23:44:19.254765 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 23:44:19.263527 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 23:44:19.264872 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 23:44:19.264872 initrd-setup-root-after-ignition[1029]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 23:44:19.267953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 23:44:19.268923 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 23:44:19.272323 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 23:44:19.347814 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 23:44:19.348038 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 23:44:19.350406 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 23:44:19.351937 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 23:44:19.353137 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 23:44:19.354166 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 23:44:19.395725 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 23:44:19.398056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 23:44:19.424112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 23:44:19.424855 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 23:44:19.426488 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 23:44:19.427547 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 23:44:19.427665 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 23:44:19.429065 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 23:44:19.429682 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 23:44:19.430773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 23:44:19.431908 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 23:44:19.432927 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 23:44:19.434010 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 23:44:19.435156 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 23:44:19.436299 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 23:44:19.437445 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 23:44:19.438433 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 23:44:19.439521 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 23:44:19.440435 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 23:44:19.440554 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 23:44:19.441817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:44:19.442470 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:44:19.443494 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 23:44:19.443579 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:44:19.444616 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 23:44:19.444723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 23:44:19.446360 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 23:44:19.446472 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 23:44:19.447641 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 23:44:19.447753 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 23:44:19.448887 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 12 23:44:19.448975 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 23:44:19.450710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 23:44:19.452235 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 23:44:19.452353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:44:19.457024 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 23:44:19.459438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 23:44:19.459569 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 23:44:19.462813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 23:44:19.462951 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 23:44:19.469768 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 23:44:19.470255 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 23:44:19.485691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 23:44:19.492633 ignition[1053]: INFO : Ignition 2.22.0
Mar 12 23:44:19.493815 ignition[1053]: INFO : Stage: umount
Mar 12 23:44:19.493815 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 23:44:19.493815 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 12 23:44:19.497256 ignition[1053]: INFO : umount: umount passed
Mar 12 23:44:19.497256 ignition[1053]: INFO : Ignition finished successfully
Mar 12 23:44:19.500335 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 23:44:19.500514 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 23:44:19.502497 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 23:44:19.503375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 23:44:19.510063 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 23:44:19.510143 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 23:44:19.512332 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 12 23:44:19.512404 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 12 23:44:19.519188 systemd[1]: Stopped target network.target - Network.
Mar 12 23:44:19.520149 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 23:44:19.520257 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 23:44:19.521894 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 23:44:19.523040 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 23:44:19.527811 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:44:19.530472 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 23:44:19.531705 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 23:44:19.534854 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 23:44:19.534897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 23:44:19.536065 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 23:44:19.536099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 23:44:19.536631 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 23:44:19.536679 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 23:44:19.537427 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 23:44:19.537468 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 23:44:19.538323 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 23:44:19.542584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 23:44:19.543987 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 23:44:19.544081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 23:44:19.545159 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 23:44:19.545303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 23:44:19.548657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 23:44:19.548796 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 23:44:19.553846 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 12 23:44:19.554073 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 23:44:19.554192 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 23:44:19.556319 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 12 23:44:19.557117 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 12 23:44:19.558089 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 23:44:19.558133 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:44:19.561268 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 23:44:19.561757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 23:44:19.561812 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 23:44:19.563922 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 23:44:19.563974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:44:19.567092 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 23:44:19.567135 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:44:19.567825 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 23:44:19.567866 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:44:19.571032 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:44:19.573350 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 12 23:44:19.573421 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:44:19.590945 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 23:44:19.591257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:44:19.595618 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 23:44:19.596420 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 23:44:19.598113 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 23:44:19.598183 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:44:19.598815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 23:44:19.598843 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:44:19.600525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 23:44:19.600575 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 23:44:19.602921 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 23:44:19.602971 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 23:44:19.604406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 23:44:19.604458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 23:44:19.606822 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 23:44:19.607657 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 12 23:44:19.607713 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:44:19.613486 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 23:44:19.613557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:44:19.615879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 23:44:19.615932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 23:44:19.621843 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 12 23:44:19.621977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 12 23:44:19.622052 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 23:44:19.627480 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 23:44:19.628770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 23:44:19.630020 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 23:44:19.633490 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 23:44:19.673536 systemd[1]: Switching root.
Mar 12 23:44:19.712909 systemd-journald[245]: Journal stopped
Mar 12 23:44:20.654672 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Mar 12 23:44:20.654753 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 23:44:20.654766 kernel: SELinux: policy capability open_perms=1
Mar 12 23:44:20.654775 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 23:44:20.654784 kernel: SELinux: policy capability always_check_network=0
Mar 12 23:44:20.654796 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 23:44:20.654805 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 23:44:20.654814 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 23:44:20.654823 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 23:44:20.654832 kernel: SELinux: policy capability userspace_initial_context=0
Mar 12 23:44:20.654842 kernel: audit: type=1403 audit(1773359059.848:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 23:44:20.654856 systemd[1]: Successfully loaded SELinux policy in 64.808ms.
Mar 12 23:44:20.654875 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.171ms.
Mar 12 23:44:20.654885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 23:44:20.654896 systemd[1]: Detected virtualization kvm.
Mar 12 23:44:20.654905 systemd[1]: Detected architecture arm64.
Mar 12 23:44:20.654915 systemd[1]: Detected first boot.
Mar 12 23:44:20.654925 systemd[1]: Hostname set to .
Mar 12 23:44:20.654937 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 23:44:20.654949 zram_generator::config[1097]: No configuration found.
Mar 12 23:44:20.654963 kernel: NET: Registered PF_VSOCK protocol family
Mar 12 23:44:20.654972 systemd[1]: Populated /etc with preset unit settings.
Mar 12 23:44:20.654983 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 12 23:44:20.654994 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 23:44:20.655004 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 23:44:20.655014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 23:44:20.655026 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 23:44:20.655036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 23:44:20.655045 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 23:44:20.655056 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 23:44:20.655066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 23:44:20.655076 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 23:44:20.655088 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 23:44:20.655098 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 23:44:20.655108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 23:44:20.655118 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 23:44:20.655128 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 23:44:20.655137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 23:44:20.655147 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 23:44:20.655158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 23:44:20.655170 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 12 23:44:20.655190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 23:44:20.655202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 23:44:20.655212 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 23:44:20.655222 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 23:44:20.655232 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 23:44:20.655241 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 23:44:20.655254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 23:44:20.655267 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 23:44:20.655277 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 23:44:20.655286 systemd[1]: Reached target swap.target - Swaps.
Mar 12 23:44:20.655296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 23:44:20.655306 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 23:44:20.655316 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 12 23:44:20.655327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 23:44:20.655352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 23:44:20.655365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 23:44:20.655375 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 23:44:20.655385 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 23:44:20.655395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 23:44:20.655405 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 23:44:20.655414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 23:44:20.655424 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 23:44:20.655434 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 23:44:20.655444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 23:44:20.655455 systemd[1]: Reached target machines.target - Containers.
Mar 12 23:44:20.655465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 23:44:20.655475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:44:20.655484 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 23:44:20.655494 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 23:44:20.655504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:44:20.655520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:44:20.655531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:44:20.655541 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 23:44:20.655551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:44:20.655562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 23:44:20.655573 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 23:44:20.655583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 23:44:20.655592 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 23:44:20.655602 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 23:44:20.655612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:44:20.655624 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 23:44:20.655634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 23:44:20.655644 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 23:44:20.655654 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 23:44:20.655666 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 12 23:44:20.655676 kernel: fuse: init (API version 7.41)
Mar 12 23:44:20.655685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 23:44:20.655697 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 23:44:20.655707 systemd[1]: Stopped verity-setup.service.
Mar 12 23:44:20.655717 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 23:44:20.655744 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 23:44:20.655756 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 23:44:20.655766 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 23:44:20.655777 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 23:44:20.655793 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 23:44:20.655805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 23:44:20.655816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:44:20.655826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:44:20.655836 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 23:44:20.655849 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 23:44:20.655859 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 23:44:20.655869 kernel: ACPI: bus type drm_connector registered
Mar 12 23:44:20.655879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 23:44:20.655888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 23:44:20.655899 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:44:20.655910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:44:20.655920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:44:20.655931 kernel: loop: module loaded
Mar 12 23:44:20.655941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:44:20.655950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:44:20.655985 systemd-journald[1158]: Collecting audit messages is disabled.
Mar 12 23:44:20.656008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:44:20.656019 systemd-journald[1158]: Journal started
Mar 12 23:44:20.656040 systemd-journald[1158]: Runtime Journal (/run/log/journal/a0c01f1e1a994d39aed685be440699ba) is 8M, max 76.5M, 68.5M free.
Mar 12 23:44:20.368542 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 23:44:20.393134 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 12 23:44:20.393784 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 23:44:20.657258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 23:44:20.659283 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 23:44:20.661337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 23:44:20.663277 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 23:44:20.670058 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 12 23:44:20.677151 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 23:44:20.679791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 23:44:20.687258 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 23:44:20.688870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 23:44:20.688914 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 23:44:20.693133 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 12 23:44:20.696871 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 23:44:20.697601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:44:20.699786 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 23:44:20.704962 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 23:44:20.706049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:44:20.708289 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 23:44:20.709295 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:44:20.718282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:44:20.727000 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 23:44:20.731126 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 23:44:20.736903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 23:44:20.741067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 23:44:20.750759 kernel: loop0: detected capacity change from 0 to 100632
Mar 12 23:44:20.757197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:44:20.775763 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 23:44:20.771194 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 23:44:20.775136 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 23:44:20.776356 systemd-journald[1158]: Time spent on flushing to /var/log/journal/a0c01f1e1a994d39aed685be440699ba is 41.298ms for 1182 entries.
Mar 12 23:44:20.776356 systemd-journald[1158]: System Journal (/var/log/journal/a0c01f1e1a994d39aed685be440699ba) is 8M, max 584.8M, 576.8M free.
Mar 12 23:44:20.830917 systemd-journald[1158]: Received client request to flush runtime journal.
Mar 12 23:44:20.830964 kernel: loop1: detected capacity change from 0 to 197488
Mar 12 23:44:20.783033 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 12 23:44:20.834042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 23:44:20.838837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 23:44:20.845025 kernel: loop2: detected capacity change from 0 to 119840
Mar 12 23:44:20.850543 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 12 23:44:20.858046 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 23:44:20.860861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 23:44:20.882786 kernel: loop3: detected capacity change from 0 to 8
Mar 12 23:44:20.892310 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Mar 12 23:44:20.892654 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Mar 12 23:44:20.897166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 23:44:20.901771 kernel: loop4: detected capacity change from 0 to 100632
Mar 12 23:44:20.912777 kernel: loop5: detected capacity change from 0 to 197488
Mar 12 23:44:20.937103 kernel: loop6: detected capacity change from 0 to 119840
Mar 12 23:44:20.948768 kernel: loop7: detected capacity change from 0 to 8
Mar 12 23:44:20.949063 (sd-merge)[1243]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 12 23:44:20.949815 (sd-merge)[1243]: Merged extensions into '/usr'.
Mar 12 23:44:20.953642 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 23:44:20.953865 systemd[1]: Reloading...
Mar 12 23:44:21.053116 zram_generator::config[1269]: No configuration found.
Mar 12 23:44:21.214992 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 23:44:21.215377 systemd[1]: Reloading finished in 261 ms.
Mar 12 23:44:21.231804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 23:44:21.240948 systemd[1]: Starting ensure-sysext.service...
Mar 12 23:44:21.243131 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 23:44:21.275031 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Mar 12 23:44:21.275057 systemd[1]: Reloading...
Mar 12 23:44:21.288857 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 12 23:44:21.289313 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 12 23:44:21.289607 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 23:44:21.289902 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 23:44:21.290612 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 23:44:21.291112 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Mar 12 23:44:21.291223 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Mar 12 23:44:21.301425 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:44:21.301438 systemd-tmpfiles[1306]: Skipping /boot
Mar 12 23:44:21.317407 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 23:44:21.317422 systemd-tmpfiles[1306]: Skipping /boot
Mar 12 23:44:21.347219 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 23:44:21.384765 zram_generator::config[1337]: No configuration found.
Mar 12 23:44:21.543061 systemd[1]: Reloading finished in 267 ms.
Mar 12 23:44:21.561373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 23:44:21.562451 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 23:44:21.568088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 23:44:21.581001 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 23:44:21.585941 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 23:44:21.587879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 23:44:21.592462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 23:44:21.599966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 23:44:21.602003 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 23:44:21.609792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:44:21.612262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 23:44:21.619057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 23:44:21.625320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 23:44:21.626893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:44:21.627033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:44:21.631482 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 23:44:21.646401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:44:21.646603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:44:21.646749 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:44:21.654781 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 23:44:21.657960 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 23:44:21.661639 systemd[1]: Finished ensure-sysext.service.
Mar 12 23:44:21.666675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 23:44:21.670122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 23:44:21.671347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 23:44:21.671399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 23:44:21.677540 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 12 23:44:21.681266 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 23:44:21.683449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 23:44:21.683832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 23:44:21.686239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 23:44:21.686426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 23:44:21.690008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 23:44:21.692925 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 23:44:21.698085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 23:44:21.700775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 23:44:21.709106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 23:44:21.712457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 23:44:21.721961 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 23:44:21.724921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 23:44:21.728573 systemd-udevd[1379]: Using default interface naming scheme 'v255'.
Mar 12 23:44:21.734217 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 23:44:21.744592 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 23:44:21.750753 augenrules[1420]: No rules
Mar 12 23:44:21.753208 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 23:44:21.757508 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 23:44:21.774703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 23:44:21.778813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 23:44:21.841129 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 23:44:21.842232 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 23:44:21.883454 systemd-resolved[1378]: Positive Trust Anchors:
Mar 12 23:44:21.883474 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 23:44:21.883506 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 23:44:21.890217 systemd-resolved[1378]: Using system hostname 'ci-4459-2-4-n-d21c84289b'.
Mar 12 23:44:21.892603 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 23:44:21.894293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 23:44:21.895522 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 23:44:21.897995 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 23:44:21.898694 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 23:44:21.899685 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 23:44:21.901915 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 23:44:21.902594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 23:44:21.903869 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 23:44:21.903909 systemd[1]: Reached target paths.target - Path Units.
Mar 12 23:44:21.905806 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 23:44:21.907634 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 23:44:21.909977 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 23:44:21.916065 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 12 23:44:21.917046 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 12 23:44:21.918890 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 12 23:44:21.924191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 23:44:21.927538 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 12 23:44:21.930632 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 23:44:21.934646 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 23:44:21.937608 systemd[1]: Reached target basic.target - Basic System.
Mar 12 23:44:21.938618 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 23:44:21.938648 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 23:44:21.942843 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 23:44:21.947955 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 23:44:21.952073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 23:44:21.955979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 23:44:21.968962 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 23:44:21.970826 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 23:44:21.973005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 23:44:21.977984 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 23:44:21.982291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 23:44:21.990003 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 23:44:21.996585 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 23:44:21.998235 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 23:44:21.999130 jq[1468]: false
Mar 12 23:44:22.002191 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 23:44:22.004419 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 23:44:22.010271 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 23:44:22.012627 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 23:44:22.014859 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 23:44:22.015061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 23:44:22.021691 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 12 23:44:22.023006 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 23:44:22.039252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 23:44:22.051791 jq[1482]: true Mar 12 23:44:22.080630 tar[1486]: linux-arm64/LICENSE Mar 12 23:44:22.080630 tar[1486]: linux-arm64/helm Mar 12 23:44:22.082635 update_engine[1480]: I20260312 23:44:22.081595 1480 main.cc:92] Flatcar Update Engine starting Mar 12 23:44:22.086811 coreos-metadata[1462]: Mar 12 23:44:22.086 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 12 23:44:22.086811 coreos-metadata[1462]: Mar 12 23:44:22.086 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata) Mar 12 23:44:22.096800 dbus-daemon[1465]: [system] SELinux support is enabled Mar 12 23:44:22.096997 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 23:44:22.098196 jq[1497]: true Mar 12 23:44:22.100864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 23:44:22.100917 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 23:44:22.101699 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 23:44:22.101713 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 23:44:22.114134 systemd[1]: Started update-engine.service - Update Engine. 
Mar 12 23:44:22.114831 systemd-networkd[1432]: lo: Link UP Mar 12 23:44:22.114842 systemd-networkd[1432]: lo: Gained carrier Mar 12 23:44:22.116080 update_engine[1480]: I20260312 23:44:22.114704 1480 update_check_scheduler.cc:74] Next update check in 3m31s Mar 12 23:44:22.118041 systemd-networkd[1432]: Enumeration completed Mar 12 23:44:22.121468 extend-filesystems[1469]: Found /dev/sda6 Mar 12 23:44:22.130089 extend-filesystems[1469]: Found /dev/sda9 Mar 12 23:44:22.134377 extend-filesystems[1469]: Checking size of /dev/sda9 Mar 12 23:44:22.142101 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 23:44:22.145034 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 23:44:22.145968 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 23:44:22.146206 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 23:44:22.148095 systemd[1]: Reached target network.target - Network. Mar 12 23:44:22.152213 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 23:44:22.157546 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 12 23:44:22.167010 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 23:44:22.178042 extend-filesystems[1469]: Resized partition /dev/sda9 Mar 12 23:44:22.181753 extend-filesystems[1534]: resize2fs 1.47.3 (8-Jul-2025) Mar 12 23:44:22.196112 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 12 23:44:22.238815 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 12 23:44:22.263476 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Mar 12 23:44:22.269041 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 12 23:44:22.277866 systemd-networkd[1432]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:44:22.277884 systemd-networkd[1432]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 23:44:22.280482 systemd[1]: Starting sshkeys.service... Mar 12 23:44:22.282023 systemd-networkd[1432]: eth1: Link UP Mar 12 23:44:22.282715 systemd-networkd[1432]: eth1: Gained carrier Mar 12 23:44:22.283334 systemd-networkd[1432]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:44:22.326356 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 23:44:22.327956 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:44:22.327961 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 23:44:22.331742 systemd-networkd[1432]: eth0: Link UP Mar 12 23:44:22.332724 systemd-networkd[1432]: eth0: Gained carrier Mar 12 23:44:22.332782 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 23:44:22.334753 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 12 23:44:22.353697 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 12 23:44:22.356932 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 12 23:44:22.362087 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 12 23:44:22.362087 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 12 23:44:22.362087 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 12 23:44:22.364112 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Mar 12 23:44:22.367444 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 23:44:22.367659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 23:44:22.369109 systemd-networkd[1432]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 12 23:44:22.373181 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 12 23:44:22.387892 systemd-networkd[1432]: eth0: DHCPv4 address 159.69.55.216/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 12 23:44:22.389540 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 12 23:44:22.447672 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 23:44:22.506751 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 23:44:22.506888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 12 23:44:22.513085 coreos-metadata[1547]: Mar 12 23:44:22.513 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 12 23:44:22.513992 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 23:44:22.520302 coreos-metadata[1547]: Mar 12 23:44:22.517 INFO Fetch successful Mar 12 23:44:22.521767 unknown[1547]: wrote ssh authorized keys file for user: core Mar 12 23:44:22.583520 systemd-logind[1478]: New seat seat0. Mar 12 23:44:22.590876 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 12 23:44:22.603413 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys" Mar 12 23:44:22.605030 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 12 23:44:22.608354 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 23:44:22.610791 systemd[1]: Finished sshkeys.service. Mar 12 23:44:22.714069 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 12 23:44:22.719030 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 12 23:44:22.850219 containerd[1543]: time="2026-03-12T23:44:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 12 23:44:22.855210 containerd[1543]: time="2026-03-12T23:44:22.855145640Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.884989760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.36µs" Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885037360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885055440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885237920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885257800Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885284680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885352400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885364160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885602440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885618520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885630120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887046 containerd[1543]: time="2026-03-12T23:44:22.885638760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 12 23:44:22.887335 containerd[1543]: time="2026-03-12T23:44:22.885703160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 12 23:44:22.890755 containerd[1543]: time="2026-03-12T23:44:22.890309200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 
23:44:22.890755 containerd[1543]: time="2026-03-12T23:44:22.890370240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 12 23:44:22.890755 containerd[1543]: time="2026-03-12T23:44:22.890382240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 12 23:44:22.890755 containerd[1543]: time="2026-03-12T23:44:22.890421640Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 12 23:44:22.890755 containerd[1543]: time="2026-03-12T23:44:22.890663640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 12 23:44:22.891796 containerd[1543]: time="2026-03-12T23:44:22.891763720Z" level=info msg="metadata content store policy set" policy=shared Mar 12 23:44:22.900626 containerd[1543]: time="2026-03-12T23:44:22.900577080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.900801400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.900942480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.900962400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.900975200Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.900990160Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901003720Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901017200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901032920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901043880Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901054200Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901066640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901270600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901295840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 12 23:44:22.901652 containerd[1543]: time="2026-03-12T23:44:22.901312760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901326640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901337920Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901349080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901360640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901371720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901384160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901401920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901413280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901598400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 12 23:44:22.901961 containerd[1543]: time="2026-03-12T23:44:22.901617840Z" level=info msg="Start snapshots syncer" Mar 12 23:44:22.904674 containerd[1543]: time="2026-03-12T23:44:22.903633720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 12 23:44:22.905114 containerd[1543]: time="2026-03-12T23:44:22.905070480Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.906782240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.906923600Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.907112600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.907201680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.907216840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 12 23:44:22.907661 containerd[1543]: time="2026-03-12T23:44:22.907231600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 12 23:44:22.907899 containerd[1543]: time="2026-03-12T23:44:22.907876440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 12 23:44:22.907969 containerd[1543]: time="2026-03-12T23:44:22.907957280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 12 23:44:22.908030 containerd[1543]: time="2026-03-12T23:44:22.908008440Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 12 23:44:22.908107 containerd[1543]: time="2026-03-12T23:44:22.908086600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 12 23:44:22.908178 containerd[1543]: time="2026-03-12T23:44:22.908159800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 12 23:44:22.908235 containerd[1543]: time="2026-03-12T23:44:22.908223120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.909821200Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.909922640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.909967560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.909981200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.909990120Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910000360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910012880Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910106320Z" level=info msg="runtime interface created" Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910113840Z" level=info msg="created NRI interface" Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910123840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910154240Z" level=info msg="Connect containerd service" Mar 12 23:44:22.911738 containerd[1543]: time="2026-03-12T23:44:22.910194200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 23:44:22.915184 
containerd[1543]: time="2026-03-12T23:44:22.915025840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 23:44:22.986234 tar[1486]: linux-arm64/README.md Mar 12 23:44:23.006939 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 23:44:23.031708 containerd[1543]: time="2026-03-12T23:44:23.031646320Z" level=info msg="Start subscribing containerd event" Mar 12 23:44:23.031708 containerd[1543]: time="2026-03-12T23:44:23.031725120Z" level=info msg="Start recovering state" Mar 12 23:44:23.031866 containerd[1543]: time="2026-03-12T23:44:23.031833800Z" level=info msg="Start event monitor" Mar 12 23:44:23.031866 containerd[1543]: time="2026-03-12T23:44:23.031848480Z" level=info msg="Start cni network conf syncer for default" Mar 12 23:44:23.031866 containerd[1543]: time="2026-03-12T23:44:23.031856080Z" level=info msg="Start streaming server" Mar 12 23:44:23.031926 containerd[1543]: time="2026-03-12T23:44:23.031875880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 12 23:44:23.031926 containerd[1543]: time="2026-03-12T23:44:23.031885440Z" level=info msg="runtime interface starting up..." Mar 12 23:44:23.031926 containerd[1543]: time="2026-03-12T23:44:23.031890920Z" level=info msg="starting plugins..." Mar 12 23:44:23.031926 containerd[1543]: time="2026-03-12T23:44:23.031906920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 12 23:44:23.032832 containerd[1543]: time="2026-03-12T23:44:23.032280360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 23:44:23.032832 containerd[1543]: time="2026-03-12T23:44:23.032334800Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 12 23:44:23.032832 containerd[1543]: time="2026-03-12T23:44:23.032808360Z" level=info msg="containerd successfully booted in 0.183255s" Mar 12 23:44:23.043529 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 23:44:23.071085 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 23:44:23.080833 coreos-metadata[1462]: Mar 12 23:44:23.080 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Mar 12 23:44:23.084346 coreos-metadata[1462]: Mar 12 23:44:23.083 INFO Fetch successful Mar 12 23:44:23.084346 coreos-metadata[1462]: Mar 12 23:44:23.084 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 12 23:44:23.084868 coreos-metadata[1462]: Mar 12 23:44:23.084 INFO Fetch successful Mar 12 23:44:23.087762 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 12 23:44:23.087843 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 12 23:44:23.087856 kernel: [drm] features: -context_init Mar 12 23:44:23.096404 kernel: [drm] number of scanouts: 1 Mar 12 23:44:23.096498 kernel: [drm] number of cap sets: 0 Mar 12 23:44:23.101771 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Mar 12 23:44:23.108080 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button) Mar 12 23:44:23.124926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:44:23.126517 systemd-logind[1478]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 12 23:44:23.139756 kernel: Console: switching to colour frame buffer device 160x50 Mar 12 23:44:23.154783 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 12 23:44:23.158981 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 23:44:23.196067 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 12 23:44:23.200490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 23:44:23.207061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:44:23.211985 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 12 23:44:23.220221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 23:44:23.222305 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 23:44:23.222661 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 23:44:23.230160 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 23:44:23.239406 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 12 23:44:23.241154 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 23:44:23.258922 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 23:44:23.262227 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 23:44:23.266958 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 12 23:44:23.267786 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 23:44:23.288066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 23:44:23.691395 systemd-networkd[1432]: eth1: Gained IPv6LL Mar 12 23:44:23.695947 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 12 23:44:23.700896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 23:44:23.703383 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 23:44:23.706620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:44:23.711078 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 12 23:44:23.743815 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 23:44:24.395315 systemd-networkd[1432]: eth0: Gained IPv6LL Mar 12 23:44:24.396327 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. Mar 12 23:44:24.479544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:44:24.481322 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 23:44:24.483848 systemd[1]: Startup finished in 2.351s (kernel) + 5.228s (initrd) + 4.699s (userspace) = 12.280s. Mar 12 23:44:24.502385 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:44:24.916222 kubelet[1658]: E0312 23:44:24.916162 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:44:24.919138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:44:24.919376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:44:24.920394 systemd[1]: kubelet.service: Consumed 797ms CPU time, 246.8M memory peak. Mar 12 23:44:35.169968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 23:44:35.171941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:44:35.343070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 23:44:35.353762 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:44:35.405398 kubelet[1677]: E0312 23:44:35.405349 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:44:35.408655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:44:35.408917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:44:35.409800 systemd[1]: kubelet.service: Consumed 178ms CPU time, 107.4M memory peak. Mar 12 23:44:45.660217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 23:44:45.664245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 23:44:45.813852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 23:44:45.825337 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 23:44:45.865229 kubelet[1693]: E0312 23:44:45.865171 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 23:44:45.868807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 23:44:45.869039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 23:44:45.870895 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107M memory peak. 
Mar 12 23:44:54.765403 systemd-timesyncd[1396]: Contacted time server 88.99.86.9:123 (2.flatcar.pool.ntp.org).
Mar 12 23:44:54.765509 systemd-timesyncd[1396]: Initial clock synchronization to Thu 2026-03-12 23:44:55.148730 UTC.
Mar 12 23:44:56.120420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 23:44:56.124100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:44:56.286971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:44:56.298409 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 23:44:56.341524 kubelet[1708]: E0312 23:44:56.341453 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 23:44:56.344673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 23:44:56.345121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 23:44:56.345733 systemd[1]: kubelet.service: Consumed 166ms CPU time, 107.2M memory peak.
Mar 12 23:45:04.158023 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 12 23:45:04.163203 systemd[1]: Started sshd@0-159.69.55.216:22-20.161.92.111:43066.service - OpenSSH per-connection server daemon (20.161.92.111:43066).
Mar 12 23:45:04.731469 sshd[1716]: Accepted publickey for core from 20.161.92.111 port 43066 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:04.734048 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:04.742769 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 12 23:45:04.745185 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 12 23:45:04.758103 systemd-logind[1478]: New session 1 of user core.
Mar 12 23:45:04.770485 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 12 23:45:04.775098 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 12 23:45:04.790803 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 12 23:45:04.794963 systemd-logind[1478]: New session c1 of user core.
Mar 12 23:45:04.942247 systemd[1721]: Queued start job for default target default.target.
Mar 12 23:45:04.962640 systemd[1721]: Created slice app.slice - User Application Slice.
Mar 12 23:45:04.963023 systemd[1721]: Reached target paths.target - Paths.
Mar 12 23:45:04.963241 systemd[1721]: Reached target timers.target - Timers.
Mar 12 23:45:04.965239 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 12 23:45:04.998354 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 12 23:45:04.998730 systemd[1721]: Reached target sockets.target - Sockets.
Mar 12 23:45:04.998804 systemd[1721]: Reached target basic.target - Basic System.
Mar 12 23:45:04.998839 systemd[1721]: Reached target default.target - Main User Target.
Mar 12 23:45:04.998871 systemd[1721]: Startup finished in 193ms.
Mar 12 23:45:04.998959 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 12 23:45:05.008145 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 12 23:45:05.314725 systemd[1]: Started sshd@1-159.69.55.216:22-20.161.92.111:43068.service - OpenSSH per-connection server daemon (20.161.92.111:43068).
Mar 12 23:45:05.848792 sshd[1732]: Accepted publickey for core from 20.161.92.111 port 43068 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:05.851480 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:05.857523 systemd-logind[1478]: New session 2 of user core.
Mar 12 23:45:05.866176 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 12 23:45:06.140130 sshd[1735]: Connection closed by 20.161.92.111 port 43068
Mar 12 23:45:06.140637 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:06.145358 systemd[1]: sshd@1-159.69.55.216:22-20.161.92.111:43068.service: Deactivated successfully.
Mar 12 23:45:06.147426 systemd[1]: session-2.scope: Deactivated successfully.
Mar 12 23:45:06.149444 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit.
Mar 12 23:45:06.152198 systemd-logind[1478]: Removed session 2.
Mar 12 23:45:06.248989 systemd[1]: Started sshd@2-159.69.55.216:22-20.161.92.111:43074.service - OpenSSH per-connection server daemon (20.161.92.111:43074).
Mar 12 23:45:06.595814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 23:45:06.599088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:06.761618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:06.776360 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 23:45:06.786799 sshd[1741]: Accepted publickey for core from 20.161.92.111 port 43074 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:06.788172 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:06.794632 systemd-logind[1478]: New session 3 of user core.
Mar 12 23:45:06.801020 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 12 23:45:06.831390 kubelet[1752]: E0312 23:45:06.831296 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 23:45:06.834501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 23:45:06.834668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 23:45:06.835384 systemd[1]: kubelet.service: Consumed 171ms CPU time, 107.2M memory peak.
Mar 12 23:45:07.071335 sshd[1758]: Connection closed by 20.161.92.111 port 43074
Mar 12 23:45:07.072095 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:07.077328 systemd[1]: sshd@2-159.69.55.216:22-20.161.92.111:43074.service: Deactivated successfully.
Mar 12 23:45:07.080432 systemd[1]: session-3.scope: Deactivated successfully.
Mar 12 23:45:07.083560 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit.
Mar 12 23:45:07.085077 systemd-logind[1478]: Removed session 3.
Mar 12 23:45:07.178759 systemd[1]: Started sshd@3-159.69.55.216:22-20.161.92.111:43084.service - OpenSSH per-connection server daemon (20.161.92.111:43084).
Mar 12 23:45:07.204471 update_engine[1480]: I20260312 23:45:07.203833 1480 update_attempter.cc:509] Updating boot flags...
Mar 12 23:45:07.721841 sshd[1765]: Accepted publickey for core from 20.161.92.111 port 43084 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:07.723951 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:07.729158 systemd-logind[1478]: New session 4 of user core.
Mar 12 23:45:07.740651 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 12 23:45:08.011907 sshd[1784]: Connection closed by 20.161.92.111 port 43084
Mar 12 23:45:08.012455 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:08.017647 systemd[1]: sshd@3-159.69.55.216:22-20.161.92.111:43084.service: Deactivated successfully.
Mar 12 23:45:08.020013 systemd[1]: session-4.scope: Deactivated successfully.
Mar 12 23:45:08.021534 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit.
Mar 12 23:45:08.022945 systemd-logind[1478]: Removed session 4.
Mar 12 23:45:08.124291 systemd[1]: Started sshd@4-159.69.55.216:22-20.161.92.111:43096.service - OpenSSH per-connection server daemon (20.161.92.111:43096).
Mar 12 23:45:08.662644 sshd[1790]: Accepted publickey for core from 20.161.92.111 port 43096 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:08.664605 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:08.671291 systemd-logind[1478]: New session 5 of user core.
Mar 12 23:45:08.680167 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 23:45:08.874017 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 23:45:08.874307 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 23:45:08.889347 sudo[1794]: pam_unix(sudo:session): session closed for user root
Mar 12 23:45:08.986908 sshd[1793]: Connection closed by 20.161.92.111 port 43096
Mar 12 23:45:08.987202 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:08.992781 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit.
Mar 12 23:45:08.994145 systemd[1]: sshd@4-159.69.55.216:22-20.161.92.111:43096.service: Deactivated successfully.
Mar 12 23:45:08.996885 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 23:45:08.999136 systemd-logind[1478]: Removed session 5.
Mar 12 23:45:09.092866 systemd[1]: Started sshd@5-159.69.55.216:22-20.161.92.111:43098.service - OpenSSH per-connection server daemon (20.161.92.111:43098).
Mar 12 23:45:09.624872 sshd[1800]: Accepted publickey for core from 20.161.92.111 port 43098 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:09.627631 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:09.634453 systemd-logind[1478]: New session 6 of user core.
Mar 12 23:45:09.644490 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 23:45:09.820518 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 23:45:09.821235 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 23:45:09.828609 sudo[1805]: pam_unix(sudo:session): session closed for user root
Mar 12 23:45:09.835665 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 12 23:45:09.836198 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 23:45:09.849062 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 12 23:45:09.897034 augenrules[1827]: No rules
Mar 12 23:45:09.897855 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 23:45:09.898526 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 23:45:09.899953 sudo[1804]: pam_unix(sudo:session): session closed for user root
Mar 12 23:45:09.994970 sshd[1803]: Connection closed by 20.161.92.111 port 43098
Mar 12 23:45:09.995924 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:10.003511 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit.
Mar 12 23:45:10.004608 systemd[1]: sshd@5-159.69.55.216:22-20.161.92.111:43098.service: Deactivated successfully.
Mar 12 23:45:10.006938 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 23:45:10.008723 systemd-logind[1478]: Removed session 6.
Mar 12 23:45:10.102999 systemd[1]: Started sshd@6-159.69.55.216:22-20.161.92.111:35980.service - OpenSSH per-connection server daemon (20.161.92.111:35980).
Mar 12 23:45:10.638675 sshd[1836]: Accepted publickey for core from 20.161.92.111 port 35980 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:45:10.640900 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:45:10.649035 systemd-logind[1478]: New session 7 of user core.
Mar 12 23:45:10.651944 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 23:45:10.835032 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 23:45:10.835336 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 23:45:11.163033 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 23:45:11.177434 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 23:45:11.409335 dockerd[1857]: time="2026-03-12T23:45:11.409279878Z" level=info msg="Starting up"
Mar 12 23:45:11.410968 dockerd[1857]: time="2026-03-12T23:45:11.410677906Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 12 23:45:11.425408 dockerd[1857]: time="2026-03-12T23:45:11.425298233Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 12 23:45:11.465400 dockerd[1857]: time="2026-03-12T23:45:11.465340441Z" level=info msg="Loading containers: start."
Mar 12 23:45:11.475831 kernel: Initializing XFRM netlink socket
Mar 12 23:45:11.741147 systemd-networkd[1432]: docker0: Link UP
Mar 12 23:45:11.746773 dockerd[1857]: time="2026-03-12T23:45:11.746262422Z" level=info msg="Loading containers: done."
Mar 12 23:45:11.761722 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1052851691-merged.mount: Deactivated successfully.
Mar 12 23:45:11.763388 dockerd[1857]: time="2026-03-12T23:45:11.763345369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 23:45:11.763456 dockerd[1857]: time="2026-03-12T23:45:11.763441832Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 12 23:45:11.763555 dockerd[1857]: time="2026-03-12T23:45:11.763531336Z" level=info msg="Initializing buildkit"
Mar 12 23:45:11.794611 dockerd[1857]: time="2026-03-12T23:45:11.794548227Z" level=info msg="Completed buildkit initialization"
Mar 12 23:45:11.803912 dockerd[1857]: time="2026-03-12T23:45:11.803812609Z" level=info msg="Daemon has completed initialization"
Mar 12 23:45:11.804786 dockerd[1857]: time="2026-03-12T23:45:11.804221470Z" level=info msg="API listen on /run/docker.sock"
Mar 12 23:45:11.804655 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 23:45:12.249496 containerd[1543]: time="2026-03-12T23:45:12.249300496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 12 23:45:12.859959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834661563.mount: Deactivated successfully.
Mar 12 23:45:13.819786 containerd[1543]: time="2026-03-12T23:45:13.819302380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:13.820631 containerd[1543]: time="2026-03-12T23:45:13.820592346Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701894"
Mar 12 23:45:13.823138 containerd[1543]: time="2026-03-12T23:45:13.821597122Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:13.824289 containerd[1543]: time="2026-03-12T23:45:13.824249767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:13.825301 containerd[1543]: time="2026-03-12T23:45:13.825267599Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 1.575923131s"
Mar 12 23:45:13.825441 containerd[1543]: time="2026-03-12T23:45:13.825421061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 12 23:45:13.826616 containerd[1543]: time="2026-03-12T23:45:13.826572751Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 12 23:45:15.002771 containerd[1543]: time="2026-03-12T23:45:15.002195093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.003747 containerd[1543]: time="2026-03-12T23:45:15.003708698Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063059"
Mar 12 23:45:15.005970 containerd[1543]: time="2026-03-12T23:45:15.005943168Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.010749 containerd[1543]: time="2026-03-12T23:45:15.010080048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.014409 containerd[1543]: time="2026-03-12T23:45:15.013716473Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.187010999s"
Mar 12 23:45:15.014409 containerd[1543]: time="2026-03-12T23:45:15.013787107Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 12 23:45:15.015326 containerd[1543]: time="2026-03-12T23:45:15.015219523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 12 23:45:15.933289 containerd[1543]: time="2026-03-12T23:45:15.933067900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.935426 containerd[1543]: time="2026-03-12T23:45:15.935203483Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797921"
Mar 12 23:45:15.936589 containerd[1543]: time="2026-03-12T23:45:15.936557079Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.940671 containerd[1543]: time="2026-03-12T23:45:15.940626014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:15.942699 containerd[1543]: time="2026-03-12T23:45:15.941742546Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 926.320875ms"
Mar 12 23:45:15.942699 containerd[1543]: time="2026-03-12T23:45:15.941776740Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 12 23:45:15.943078 containerd[1543]: time="2026-03-12T23:45:15.943012707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 12 23:45:16.862056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 12 23:45:16.864589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:16.910194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732074340.mount: Deactivated successfully.
Mar 12 23:45:17.028647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:17.039232 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 23:45:17.090739 kubelet[2150]: E0312 23:45:17.090672 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 23:45:17.094475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 23:45:17.094612 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 23:45:17.095577 systemd[1]: kubelet.service: Consumed 158ms CPU time, 106.2M memory peak.
Mar 12 23:45:17.212845 containerd[1543]: time="2026-03-12T23:45:17.212558381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:17.214516 containerd[1543]: time="2026-03-12T23:45:17.214474356Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329609"
Mar 12 23:45:17.215768 containerd[1543]: time="2026-03-12T23:45:17.215554252Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:17.217831 containerd[1543]: time="2026-03-12T23:45:17.217780653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:17.218461 containerd[1543]: time="2026-03-12T23:45:17.218430019Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.275367593s"
Mar 12 23:45:17.218856 containerd[1543]: time="2026-03-12T23:45:17.218542423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 12 23:45:17.219321 containerd[1543]: time="2026-03-12T23:45:17.219289436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 12 23:45:17.818616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716678503.mount: Deactivated successfully.
Mar 12 23:45:18.606997 containerd[1543]: time="2026-03-12T23:45:18.606797491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:18.609428 containerd[1543]: time="2026-03-12T23:45:18.608698347Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309"
Mar 12 23:45:18.611347 containerd[1543]: time="2026-03-12T23:45:18.611289975Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:18.616790 containerd[1543]: time="2026-03-12T23:45:18.616719015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:18.618364 containerd[1543]: time="2026-03-12T23:45:18.618313832Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.398916405s"
Mar 12 23:45:18.618364 containerd[1543]: time="2026-03-12T23:45:18.618356767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 12 23:45:18.618938 containerd[1543]: time="2026-03-12T23:45:18.618851344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 12 23:45:19.159590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276397535.mount: Deactivated successfully.
Mar 12 23:45:19.172308 containerd[1543]: time="2026-03-12T23:45:19.171313652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:19.172843 containerd[1543]: time="2026-03-12T23:45:19.172793244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Mar 12 23:45:19.174353 containerd[1543]: time="2026-03-12T23:45:19.173980188Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:19.176795 containerd[1543]: time="2026-03-12T23:45:19.176748402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:19.177921 containerd[1543]: time="2026-03-12T23:45:19.177883846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 558.996665ms"
Mar 12 23:45:19.178053 containerd[1543]: time="2026-03-12T23:45:19.178034498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 12 23:45:19.178675 containerd[1543]: time="2026-03-12T23:45:19.178633982Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 12 23:45:19.773481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202857595.mount: Deactivated successfully.
Mar 12 23:45:20.413424 containerd[1543]: time="2026-03-12T23:45:20.413280014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:20.414900 containerd[1543]: time="2026-03-12T23:45:20.414803562Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738239"
Mar 12 23:45:20.415879 containerd[1543]: time="2026-03-12T23:45:20.415826700Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:20.419542 containerd[1543]: time="2026-03-12T23:45:20.419471452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:20.421511 containerd[1543]: time="2026-03-12T23:45:20.421406700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.242723265s"
Mar 12 23:45:20.421511 containerd[1543]: time="2026-03-12T23:45:20.421455904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 12 23:45:24.124531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:24.124774 systemd[1]: kubelet.service: Consumed 158ms CPU time, 106.2M memory peak.
Mar 12 23:45:24.127196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:24.160930 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)...
Mar 12 23:45:24.160947 systemd[1]: Reloading...
Mar 12 23:45:24.318776 zram_generator::config[2361]: No configuration found.
Mar 12 23:45:24.489357 systemd[1]: Reloading finished in 328 ms.
Mar 12 23:45:24.550993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 23:45:24.551156 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 23:45:24.551493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:24.551559 systemd[1]: kubelet.service: Consumed 111ms CPU time, 94.8M memory peak.
Mar 12 23:45:24.553528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:24.715244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:24.729317 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 23:45:24.775498 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 23:45:25.170568 kubelet[2397]: I0312 23:45:25.170481 2397 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 12 23:45:25.170568 kubelet[2397]: I0312 23:45:25.170556 2397 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 23:45:25.172102 kubelet[2397]: I0312 23:45:25.172066 2397 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 23:45:25.172102 kubelet[2397]: I0312 23:45:25.172094 2397 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 23:45:25.172453 kubelet[2397]: I0312 23:45:25.172415 2397 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 12 23:45:25.179947 kubelet[2397]: I0312 23:45:25.179898 2397 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 23:45:25.181119 kubelet[2397]: E0312 23:45:25.181069 2397 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://159.69.55.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.55.216:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 23:45:25.184123 kubelet[2397]: I0312 23:45:25.184095 2397 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 23:45:25.187124 kubelet[2397]: I0312 23:45:25.187097 2397 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 23:45:25.188212 kubelet[2397]: I0312 23:45:25.188144 2397 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 23:45:25.188456 kubelet[2397]: I0312 23:45:25.188203 2397 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-d21c84289b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 23:45:25.188456 kubelet[2397]: I0312 23:45:25.188449 2397 topology_manager.go:143] "Creating topology manager with none policy"
Mar 12 23:45:25.188590 kubelet[2397]: I0312 23:45:25.188464 2397 container_manager_linux.go:308] "Creating device plugin manager"
Mar 12 23:45:25.188590 kubelet[2397]: I0312 23:45:25.188584 2397 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 23:45:25.191497 kubelet[2397]: I0312 23:45:25.191447 2397 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 12 23:45:25.191791 kubelet[2397]: I0312 23:45:25.191776 2397 kubelet.go:482] "Attempting to sync node with API server"
Mar 12 23:45:25.191855 kubelet[2397]: I0312 23:45:25.191803 2397 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 23:45:25.191855 kubelet[2397]: I0312 23:45:25.191825 2397 kubelet.go:394] "Adding apiserver pod source"
Mar 12 23:45:25.191855 kubelet[2397]: I0312 23:45:25.191837 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 23:45:25.196476 kubelet[2397]: I0312 23:45:25.196437 2397 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 12 23:45:25.198040 kubelet[2397]: I0312 23:45:25.198002 2397 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 23:45:25.198125 kubelet[2397]: I0312 23:45:25.198056 2397 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 23:45:25.198125 kubelet[2397]: W0312 23:45:25.198119 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 23:45:25.202284 kubelet[2397]: I0312 23:45:25.202248 2397 server.go:1257] "Started kubelet"
Mar 12 23:45:25.204029 kubelet[2397]: I0312 23:45:25.203953 2397 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 23:45:25.205836 kubelet[2397]: I0312 23:45:25.205143 2397 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 23:45:25.211997 kubelet[2397]: I0312 23:45:25.211919 2397 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 23:45:25.212320 kubelet[2397]: I0312 23:45:25.212256 2397 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 23:45:25.212586 kubelet[2397]: I0312 23:45:25.212554 2397 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 23:45:25.214504 kubelet[2397]: E0312 23:45:25.212718 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.55.216:6443/api/v1/namespaces/default/events\": dial tcp 159.69.55.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-d21c84289b.189c3cba8cdfe498 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-d21c84289b,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-d21c84289b,},FirstTimestamp:2026-03-12 23:45:25.202199704 +0000 UTC m=+0.467886796,LastTimestamp:2026-03-12 23:45:25.202199704 +0000 UTC m=+0.467886796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-d21c84289b,}"
Mar 12 23:45:25.218565 kubelet[2397]: E0312 23:45:25.218520 2397 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 23:45:25.218870 kubelet[2397]: I0312 23:45:25.218855 2397 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 12 23:45:25.218961 kubelet[2397]: I0312 23:45:25.218938 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 23:45:25.219752 kubelet[2397]: I0312 23:45:25.219516 2397 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 12 23:45:25.221951 kubelet[2397]: I0312 23:45:25.221914 2397 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 23:45:25.222013 kubelet[2397]: I0312 23:45:25.221987 2397 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 23:45:25.223999 kubelet[2397]: E0312 23:45:25.223926 2397 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-d21c84289b\" not found"
Mar 12 23:45:25.224614 kubelet[2397]: I0312 23:45:25.224577 2397 factory.go:223] Registration of the systemd container factory successfully
Mar 12 23:45:25.224702 kubelet[2397]: I0312 23:45:25.224680 2397 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 23:45:25.225136 kubelet[2397]: E0312 23:45:25.225084 2397 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.55.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-d21c84289b?timeout=10s\": dial tcp 159.69.55.216:6443: connect: connection refused" interval="200ms"
Mar 12 23:45:25.226510 kubelet[2397]: I0312 23:45:25.226153 2397 factory.go:223] Registration of the containerd container factory successfully
Mar 12 23:45:25.243058 kubelet[2397]: I0312 23:45:25.242998 2397 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 12 23:45:25.245914 kubelet[2397]: I0312 23:45:25.245875 2397 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 12 23:45:25.245914 kubelet[2397]: I0312 23:45:25.245911 2397 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 12 23:45:25.246064 kubelet[2397]: I0312 23:45:25.245935 2397 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 12 23:45:25.246064 kubelet[2397]: E0312 23:45:25.245981 2397 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 23:45:25.249467 kubelet[2397]: I0312 23:45:25.248695 2397 cpu_manager.go:225] "Starting" policy="none"
Mar 12 23:45:25.249467 kubelet[2397]: I0312 23:45:25.248718 2397 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 23:45:25.249467 kubelet[2397]: I0312 23:45:25.248751 2397 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 12 23:45:25.250981 kubelet[2397]: I0312 23:45:25.250956 2397 policy_none.go:50] "Start"
Mar 12 23:45:25.251048 kubelet[2397]: I0312 23:45:25.250988 2397 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 12 23:45:25.251048 kubelet[2397]: I0312 23:45:25.251001 2397 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 12 23:45:25.252879 kubelet[2397]: I0312 23:45:25.252855 2397 policy_none.go:44] "Start"
Mar 12 23:45:25.258242 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 12 23:45:25.273827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 12 23:45:25.278695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 12 23:45:25.289650 kubelet[2397]: E0312 23:45:25.289608 2397 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 23:45:25.290163 kubelet[2397]: I0312 23:45:25.290137 2397 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 12 23:45:25.290335 kubelet[2397]: I0312 23:45:25.290284 2397 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 23:45:25.291210 kubelet[2397]: I0312 23:45:25.291175 2397 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 12 23:45:25.293174 kubelet[2397]: E0312 23:45:25.293099 2397 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 23:45:25.293297 kubelet[2397]: E0312 23:45:25.293184 2397 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-d21c84289b\" not found"
Mar 12 23:45:25.362040 systemd[1]: Created slice kubepods-burstable-pod49afcbf65d236c1fafa8aa8e241c03c3.slice - libcontainer container kubepods-burstable-pod49afcbf65d236c1fafa8aa8e241c03c3.slice.
Mar 12 23:45:25.369488 kubelet[2397]: E0312 23:45:25.369415 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.374101 systemd[1]: Created slice kubepods-burstable-poda89591fbd450f593dff60f29b471a873.slice - libcontainer container kubepods-burstable-poda89591fbd450f593dff60f29b471a873.slice.
Mar 12 23:45:25.376218 kubelet[2397]: E0312 23:45:25.376187 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.386527 systemd[1]: Created slice kubepods-burstable-pod0b78c1dbe8dad05761bb195c4aa6400e.slice - libcontainer container kubepods-burstable-pod0b78c1dbe8dad05761bb195c4aa6400e.slice.
Mar 12 23:45:25.389073 kubelet[2397]: E0312 23:45:25.389037 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.392627 kubelet[2397]: I0312 23:45:25.392603 2397 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.393249 kubelet[2397]: E0312 23:45:25.393214 2397 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://159.69.55.216:6443/api/v1/nodes\": dial tcp 159.69.55.216:6443: connect: connection refused" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.424242 kubelet[2397]: I0312 23:45:25.423808 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.424583 kubelet[2397]: I0312 23:45:25.424171 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.424812 kubelet[2397]: I0312 23:45:25.424722 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425007 kubelet[2397]: I0312 23:45:25.424962 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89591fbd450f593dff60f29b471a873-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-d21c84289b\" (UID: \"a89591fbd450f593dff60f29b471a873\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425507 kubelet[2397]: I0312 23:45:25.425207 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425507 kubelet[2397]: I0312 23:45:25.425253 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425507 kubelet[2397]: I0312 23:45:25.425294 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425507 kubelet[2397]: I0312 23:45:25.425335 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.425507 kubelet[2397]: I0312 23:45:25.425373 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.426471 kubelet[2397]: E0312 23:45:25.426428 2397 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.55.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-d21c84289b?timeout=10s\": dial tcp 159.69.55.216:6443: connect: connection refused" interval="400ms"
Mar 12 23:45:25.596497 kubelet[2397]: I0312 23:45:25.596050 2397 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.596679 kubelet[2397]: E0312 23:45:25.596602 2397 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://159.69.55.216:6443/api/v1/nodes\": dial tcp 159.69.55.216:6443: connect: connection refused" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:25.675944 containerd[1543]: time="2026-03-12T23:45:25.675828749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-d21c84289b,Uid:49afcbf65d236c1fafa8aa8e241c03c3,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:25.679197 containerd[1543]: time="2026-03-12T23:45:25.679004078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-d21c84289b,Uid:a89591fbd450f593dff60f29b471a873,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:25.685083 kubelet[2397]: E0312 23:45:25.684967 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.55.216:6443/api/v1/namespaces/default/events\": dial tcp 159.69.55.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-d21c84289b.189c3cba8cdfe498 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-d21c84289b,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-d21c84289b,},FirstTimestamp:2026-03-12 23:45:25.202199704 +0000 UTC m=+0.467886796,LastTimestamp:2026-03-12 23:45:25.202199704 +0000 UTC m=+0.467886796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-d21c84289b,}"
Mar 12 23:45:25.693148 containerd[1543]: time="2026-03-12T23:45:25.693098490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-d21c84289b,Uid:0b78c1dbe8dad05761bb195c4aa6400e,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:25.827257 kubelet[2397]: E0312 23:45:25.827140 2397 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.55.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-d21c84289b?timeout=10s\": dial tcp 159.69.55.216:6443: connect: connection refused" interval="800ms"
Mar 12 23:45:25.999391 kubelet[2397]: I0312 23:45:25.999265 2397 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:26.000323 kubelet[2397]: E0312 23:45:26.000286 2397 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://159.69.55.216:6443/api/v1/nodes\": dial tcp 159.69.55.216:6443: connect: connection refused" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:26.227573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565358330.mount: Deactivated successfully.
Mar 12 23:45:26.238766 containerd[1543]: time="2026-03-12T23:45:26.238379775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:45:26.242358 containerd[1543]: time="2026-03-12T23:45:26.242309494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 12 23:45:26.245694 containerd[1543]: time="2026-03-12T23:45:26.245060794Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:45:26.249745 containerd[1543]: time="2026-03-12T23:45:26.249557225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:45:26.251011 containerd[1543]: time="2026-03-12T23:45:26.250966701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 12 23:45:26.252826 containerd[1543]: time="2026-03-12T23:45:26.252061617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:45:26.253186 containerd[1543]: time="2026-03-12T23:45:26.252915788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 572.059215ms"
Mar 12 23:45:26.254948 containerd[1543]: time="2026-03-12T23:45:26.254656237Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 23:45:26.256126 containerd[1543]: time="2026-03-12T23:45:26.255898385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 12 23:45:26.259125 containerd[1543]: time="2026-03-12T23:45:26.259075129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.195087ms"
Mar 12 23:45:26.261634 containerd[1543]: time="2026-03-12T23:45:26.261164323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 582.671492ms"
Mar 12 23:45:26.285852 containerd[1543]: time="2026-03-12T23:45:26.283710730Z" level=info msg="connecting to shim c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326" address="unix:///run/containerd/s/75c233071d902cc08698eb220b2779b3b577438a3811fef9ac89b76cd41b10b7" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:26.304300 containerd[1543]: time="2026-03-12T23:45:26.304160977Z" level=info msg="connecting to shim 461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687" address="unix:///run/containerd/s/db9c64688b55b48db7154231399c572f6e19ff2581cb04500cbf35a41323dda5" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:26.310046 containerd[1543]: time="2026-03-12T23:45:26.309965086Z" level=info msg="connecting to shim da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee" address="unix:///run/containerd/s/f6e2a7ad49d69404472b4cb877ffaa2430d5d9c1c2b1225228210c775e10f206" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:26.338964 systemd[1]: Started cri-containerd-c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326.scope - libcontainer container c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326.
Mar 12 23:45:26.344141 systemd[1]: Started cri-containerd-da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee.scope - libcontainer container da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee.
Mar 12 23:45:26.351504 systemd[1]: Started cri-containerd-461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687.scope - libcontainer container 461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687.
Mar 12 23:45:26.399461 containerd[1543]: time="2026-03-12T23:45:26.399417592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-d21c84289b,Uid:49afcbf65d236c1fafa8aa8e241c03c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee\""
Mar 12 23:45:26.413757 containerd[1543]: time="2026-03-12T23:45:26.413265961Z" level=info msg="CreateContainer within sandbox \"da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 12 23:45:26.431491 containerd[1543]: time="2026-03-12T23:45:26.431448317Z" level=info msg="Container 03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:26.431776 containerd[1543]: time="2026-03-12T23:45:26.431741581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-d21c84289b,Uid:a89591fbd450f593dff60f29b471a873,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326\""
Mar 12 23:45:26.433013 containerd[1543]: time="2026-03-12T23:45:26.432964354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-d21c84289b,Uid:0b78c1dbe8dad05761bb195c4aa6400e,Namespace:kube-system,Attempt:0,} returns sandbox id \"461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687\""
Mar 12 23:45:26.437717 containerd[1543]: time="2026-03-12T23:45:26.437681874Z" level=info msg="CreateContainer within sandbox \"461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 12 23:45:26.439327 containerd[1543]: time="2026-03-12T23:45:26.439269366Z" level=info msg="CreateContainer within sandbox \"c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 12 23:45:26.443014 containerd[1543]: time="2026-03-12T23:45:26.442909263Z" level=info msg="CreateContainer within sandbox \"da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c\""
Mar 12 23:45:26.444063 containerd[1543]: time="2026-03-12T23:45:26.444007502Z" level=info msg="StartContainer for \"03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c\""
Mar 12 23:45:26.445226 containerd[1543]: time="2026-03-12T23:45:26.445194047Z" level=info msg="connecting to shim 03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c" address="unix:///run/containerd/s/f6e2a7ad49d69404472b4cb877ffaa2430d5d9c1c2b1225228210c775e10f206" protocol=ttrpc version=3
Mar 12 23:45:26.453359 containerd[1543]: time="2026-03-12T23:45:26.453264806Z" level=info msg="Container f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:26.458534 containerd[1543]: time="2026-03-12T23:45:26.458434272Z" level=info msg="Container 837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:26.466939 containerd[1543]: time="2026-03-12T23:45:26.466797334Z" level=info msg="CreateContainer within sandbox \"461945fd89fedb71c859145d4812e55b5114a70056a712464d8e71f865fc2687\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb\""
Mar 12 23:45:26.467980 containerd[1543]: time="2026-03-12T23:45:26.467600507Z" level=info msg="StartContainer for \"f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb\""
Mar 12 23:45:26.468961 systemd[1]: Started cri-containerd-03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c.scope - libcontainer container 03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c.
Mar 12 23:45:26.470346 containerd[1543]: time="2026-03-12T23:45:26.469849703Z" level=info msg="connecting to shim f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb" address="unix:///run/containerd/s/db9c64688b55b48db7154231399c572f6e19ff2581cb04500cbf35a41323dda5" protocol=ttrpc version=3
Mar 12 23:45:26.470973 containerd[1543]: time="2026-03-12T23:45:26.470941136Z" level=info msg="CreateContainer within sandbox \"c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4\""
Mar 12 23:45:26.471967 containerd[1543]: time="2026-03-12T23:45:26.471937537Z" level=info msg="StartContainer for \"837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4\""
Mar 12 23:45:26.482288 containerd[1543]: time="2026-03-12T23:45:26.482166863Z" level=info msg="connecting to shim 837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4" address="unix:///run/containerd/s/75c233071d902cc08698eb220b2779b3b577438a3811fef9ac89b76cd41b10b7" protocol=ttrpc version=3
Mar 12 23:45:26.499491 systemd[1]: Started cri-containerd-f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb.scope - libcontainer container f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb.
Mar 12 23:45:26.512957 systemd[1]: Started cri-containerd-837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4.scope - libcontainer container 837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4.
Mar 12 23:45:26.577287 containerd[1543]: time="2026-03-12T23:45:26.577244703Z" level=info msg="StartContainer for \"f2e78a76fe6f2e2eea0c52801eef1d361558ca5c7b1c060941d6512b988badfb\" returns successfully"
Mar 12 23:45:26.584036 containerd[1543]: time="2026-03-12T23:45:26.583901743Z" level=info msg="StartContainer for \"03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c\" returns successfully"
Mar 12 23:45:26.597085 containerd[1543]: time="2026-03-12T23:45:26.597048536Z" level=info msg="StartContainer for \"837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4\" returns successfully"
Mar 12 23:45:26.628448 kubelet[2397]: E0312 23:45:26.628380 2397 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.55.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-d21c84289b?timeout=10s\": dial tcp 159.69.55.216:6443: connect: connection refused" interval="1.6s"
Mar 12 23:45:26.804970 kubelet[2397]: I0312 23:45:26.804874 2397 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:27.264529 kubelet[2397]: E0312 23:45:27.264495 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:27.267123 kubelet[2397]: E0312 23:45:27.266922 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:27.271030 kubelet[2397]: E0312 23:45:27.271000 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.284154 kubelet[2397]: E0312 23:45:28.284105 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.284516 kubelet[2397]: E0312 23:45:28.284490 2397 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-d21c84289b\" not found" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.519630 kubelet[2397]: I0312 23:45:28.519579 2397 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.529136 kubelet[2397]: I0312 23:45:28.529071 2397 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.546504 kubelet[2397]: E0312 23:45:28.545699 2397 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.546504 kubelet[2397]: I0312 23:45:28.545750 2397 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.548936 kubelet[2397]: E0312 23:45:28.548888 2397 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-d21c84289b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.548936 kubelet[2397]: I0312 23:45:28.548924 2397 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:28.551356 kubelet[2397]: E0312 23:45:28.551307 2397 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:29.197001 kubelet[2397]: I0312 23:45:29.196940 2397 apiserver.go:52] "Watching apiserver"
Mar 12 23:45:29.223121 kubelet[2397]: I0312 23:45:29.223058 2397 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 12 23:45:30.680117 systemd[1]: Reload requested from client PID 2690 ('systemctl') (unit session-7.scope)...
Mar 12 23:45:30.680599 systemd[1]: Reloading...
Mar 12 23:45:30.791770 zram_generator::config[2737]: No configuration found.
Mar 12 23:45:31.007050 systemd[1]: Reloading finished in 326 ms.
Mar 12 23:45:31.041111 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:31.056355 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 23:45:31.056933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:31.056999 systemd[1]: kubelet.service: Consumed 897ms CPU time, 119.4M memory peak.
Mar 12 23:45:31.063140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 23:45:31.245186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 23:45:31.257266 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 23:45:31.309519 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 23:45:31.323554 kubelet[2779]: I0312 23:45:31.322572 2779 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 12 23:45:31.323554 kubelet[2779]: I0312 23:45:31.322790 2779 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 23:45:31.323554 kubelet[2779]: I0312 23:45:31.322831 2779 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 23:45:31.323554 kubelet[2779]: I0312 23:45:31.322838 2779 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 23:45:31.323554 kubelet[2779]: I0312 23:45:31.323222 2779 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 12 23:45:31.325084 kubelet[2779]: I0312 23:45:31.324998 2779 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 12 23:45:31.328259 kubelet[2779]: I0312 23:45:31.328170 2779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 23:45:31.334665 kubelet[2779]: I0312 23:45:31.334638 2779 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 23:45:31.341787 kubelet[2779]: I0312 23:45:31.340953 2779 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 23:45:31.341787 kubelet[2779]: I0312 23:45:31.341180 2779 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 23:45:31.341787 kubelet[2779]: I0312 23:45:31.341205 2779 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-d21c84289b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 23:45:31.341787 kubelet[2779]: I0312 23:45:31.341359 2779 topology_manager.go:143] "Creating topology manager with none policy"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341368 2779 container_manager_linux.go:308] "Creating device plugin manager"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341391 2779 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341581 2779 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341725 2779 kubelet.go:482] "Attempting to sync node with API server"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341752 2779 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 23:45:31.342063 kubelet[2779]: I0312 23:45:31.341769 2779 kubelet.go:394] "Adding apiserver pod source"
Mar 12 23:45:31.342431 kubelet[2779]: I0312 23:45:31.342415 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 23:45:31.346133 kubelet[2779]: I0312 23:45:31.346097 2779 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 12 23:45:31.348784 kubelet[2779]: I0312 23:45:31.348446 2779 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 23:45:31.349458 kubelet[2779]: I0312 23:45:31.349427 2779 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 23:45:31.358835 kubelet[2779]: I0312 23:45:31.358139 2779 server.go:1257] "Started kubelet"
Mar 12 23:45:31.360668 kubelet[2779]: I0312 23:45:31.360637 2779 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 12 23:45:31.369844 kubelet[2779]: I0312 23:45:31.369772 2779 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 23:45:31.370884 kubelet[2779]: I0312 23:45:31.370859 2779 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 23:45:31.379260 kubelet[2779]: I0312 23:45:31.379074 2779 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 23:45:31.379384 kubelet[2779]: I0312 23:45:31.379271 2779 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 23:45:31.382888 kubelet[2779]: I0312 23:45:31.382835 2779 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 23:45:31.385754 kubelet[2779]: I0312 23:45:31.385539 2779 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 23:45:31.389105 kubelet[2779]: I0312 23:45:31.389071 2779 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 12 23:45:31.389373 kubelet[2779]: E0312 23:45:31.389335 2779 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-d21c84289b\" not found"
Mar 12 23:45:31.391391 kubelet[2779]: I0312 23:45:31.391251 2779 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 12 23:45:31.391391 kubelet[2779]: I0312 23:45:31.391361 2779 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 23:45:31.391510 kubelet[2779]: I0312 23:45:31.391471 2779 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 23:45:31.393558 kubelet[2779]: I0312 23:45:31.393208 2779 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 12 23:45:31.393558 kubelet[2779]: I0312 23:45:31.393232 2779 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 12 23:45:31.393558 kubelet[2779]: I0312 23:45:31.393253 2779 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 12 23:45:31.393558 kubelet[2779]: E0312 23:45:31.393293 2779 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 23:45:31.395144 kubelet[2779]: I0312 23:45:31.395109 2779 factory.go:223] Registration of the systemd container factory successfully
Mar 12 23:45:31.395267 kubelet[2779]: I0312 23:45:31.395240 2779 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 23:45:31.399632 kubelet[2779]: I0312 23:45:31.399600 2779 factory.go:223] Registration of the containerd container factory successfully
Mar 12 23:45:31.463441 kubelet[2779]: I0312 23:45:31.463404 2779 cpu_manager.go:225] "Starting" policy="none"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.463937 2779 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.463970 2779 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464132 2779 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464144 2779 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464163 2779 policy_none.go:50] "Start"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464172 2779 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464182 2779 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464315 2779 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 12 23:45:31.464465 kubelet[2779]: I0312 23:45:31.464331 2779 policy_none.go:44] "Start"
Mar 12 23:45:31.472139 kubelet[2779]: E0312 23:45:31.472093 2779 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 23:45:31.472474 kubelet[2779]: I0312 23:45:31.472431 2779 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 12 23:45:31.472515 kubelet[2779]: I0312 23:45:31.472464 2779 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 23:45:31.474651 kubelet[2779]: I0312 23:45:31.473719 2779 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 12 23:45:31.475819 kubelet[2779]: E0312 23:45:31.475546 2779 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 23:45:31.494017 kubelet[2779]: I0312 23:45:31.493978 2779 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.494667 kubelet[2779]: I0312 23:45:31.494287 2779 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.495080 kubelet[2779]: I0312 23:45:31.494389 2779 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.578579 kubelet[2779]: I0312 23:45:31.577028 2779 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.588944 kubelet[2779]: I0312 23:45:31.588529 2779 kubelet_node_status.go:123] "Node was previously registered" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.588944 kubelet[2779]: I0312 23:45:31.588632 2779 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.592695 kubelet[2779]: I0312 23:45:31.592643 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.680183 sudo[2816]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 12 23:45:31.680508 sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 12 23:45:31.693171 kubelet[2779]: I0312 23:45:31.693122 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693341 kubelet[2779]: I0312 23:45:31.693192 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693341 kubelet[2779]: I0312 23:45:31.693211 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b78c1dbe8dad05761bb195c4aa6400e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" (UID: \"0b78c1dbe8dad05761bb195c4aa6400e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693341 kubelet[2779]: I0312 23:45:31.693230 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693341 kubelet[2779]: I0312 23:45:31.693245 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693341 kubelet[2779]: I0312 23:45:31.693262 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693518 kubelet[2779]: I0312 23:45:31.693277 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89591fbd450f593dff60f29b471a873-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-d21c84289b\" (UID: \"a89591fbd450f593dff60f29b471a873\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:31.693518 kubelet[2779]: I0312 23:45:31.693293 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49afcbf65d236c1fafa8aa8e241c03c3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-d21c84289b\" (UID: \"49afcbf65d236c1fafa8aa8e241c03c3\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:32.037973 sudo[2816]: pam_unix(sudo:session): session closed for user root
Mar 12 23:45:32.344369 kubelet[2779]: I0312 23:45:32.343857 2779 apiserver.go:52] "Watching apiserver"
Mar 12 23:45:32.392139 kubelet[2779]: I0312 23:45:32.392045 2779 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 12 23:45:32.442757 kubelet[2779]: I0312 23:45:32.441545 2779 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:32.453186 kubelet[2779]: E0312 23:45:32.452970 2779 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-d21c84289b\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b"
Mar 12 23:45:32.486144 kubelet[2779]: I0312 23:45:32.486052 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-d21c84289b" podStartSLOduration=1.486036355 podStartE2EDuration="1.486036355s" podCreationTimestamp="2026-03-12 23:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:32.47203291 +0000 UTC m=+1.209828604" watchObservedRunningTime="2026-03-12 23:45:32.486036355 +0000 UTC m=+1.223832049"
Mar 12 23:45:32.506905 kubelet[2779]: I0312 23:45:32.506595 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-d21c84289b" podStartSLOduration=1.506581835 podStartE2EDuration="1.506581835s" podCreationTimestamp="2026-03-12 23:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:32.487024428 +0000 UTC m=+1.224820122" watchObservedRunningTime="2026-03-12 23:45:32.506581835 +0000 UTC m=+1.244377529"
Mar 12 23:45:32.592263 kubelet[2779]: I0312 23:45:32.592159 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-d21c84289b" podStartSLOduration=1.592144564 podStartE2EDuration="1.592144564s" podCreationTimestamp="2026-03-12 23:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:32.507509177 +0000 UTC m=+1.245304871" watchObservedRunningTime="2026-03-12 23:45:32.592144564 +0000 UTC m=+1.329940258"
Mar 12 23:45:33.788857 sudo[1840]: pam_unix(sudo:session): session closed for user root
Mar 12 23:45:33.882445 sshd[1839]: Connection closed by 20.161.92.111 port 35980
Mar 12 23:45:33.883455 sshd-session[1836]: pam_unix(sshd:session): session closed for user core
Mar 12 23:45:33.889652 systemd[1]: sshd@6-159.69.55.216:22-20.161.92.111:35980.service: Deactivated successfully.
Mar 12 23:45:33.892471 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 23:45:33.893240 systemd[1]: session-7.scope: Consumed 5.998s CPU time, 259.1M memory peak.
Mar 12 23:45:33.894897 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit.
Mar 12 23:45:33.896833 systemd-logind[1478]: Removed session 7.
Mar 12 23:45:35.884352 kubelet[2779]: I0312 23:45:35.884306 2779 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 12 23:45:35.885669 containerd[1543]: time="2026-03-12T23:45:35.885584597Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 12 23:45:35.887812 kubelet[2779]: I0312 23:45:35.885857 2779 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 12 23:45:36.681083 systemd[1]: Created slice kubepods-besteffort-podf57655d2_087e_4af4_b29c_7de370429ec7.slice - libcontainer container kubepods-besteffort-podf57655d2_087e_4af4_b29c_7de370429ec7.slice.
Mar 12 23:45:36.701561 systemd[1]: Created slice kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice - libcontainer container kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice.
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728305 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-hubble-tls\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728357 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f57655d2-087e-4af4-b29c-7de370429ec7-kube-proxy\") pod \"kube-proxy-jrr4t\" (UID: \"f57655d2-087e-4af4-b29c-7de370429ec7\") " pod="kube-system/kube-proxy-jrr4t"
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728375 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f57655d2-087e-4af4-b29c-7de370429ec7-lib-modules\") pod \"kube-proxy-jrr4t\" (UID: \"f57655d2-087e-4af4-b29c-7de370429ec7\") " pod="kube-system/kube-proxy-jrr4t"
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728390 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-run\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728406 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-lib-modules\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.728755 kubelet[2779]: I0312 23:45:36.728421 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-xtables-lock\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728438 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wq2\" (UniqueName: \"kubernetes.io/projected/f57655d2-087e-4af4-b29c-7de370429ec7-kube-api-access-r7wq2\") pod \"kube-proxy-jrr4t\" (UID: \"f57655d2-087e-4af4-b29c-7de370429ec7\") " pod="kube-system/kube-proxy-jrr4t"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728454 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-bpf-maps\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728469 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-hostproc\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728484 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-cgroup\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728497 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f57655d2-087e-4af4-b29c-7de370429ec7-xtables-lock\") pod \"kube-proxy-jrr4t\" (UID: \"f57655d2-087e-4af4-b29c-7de370429ec7\") " pod="kube-system/kube-proxy-jrr4t"
Mar 12 23:45:36.729005 kubelet[2779]: I0312 23:45:36.728526 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-etc-cni-netd\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729125 kubelet[2779]: I0312 23:45:36.728543 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44489147-70fe-47c8-82b0-40df60011ce4-clustermesh-secrets\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729125 kubelet[2779]: I0312 23:45:36.728559 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44489147-70fe-47c8-82b0-40df60011ce4-cilium-config-path\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729125 kubelet[2779]: I0312 23:45:36.728573 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-net\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729125 kubelet[2779]: I0312 23:45:36.728588 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw2qq\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729125 kubelet[2779]: I0312 23:45:36.728605 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cni-path\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.729273 kubelet[2779]: I0312 23:45:36.728623 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-kernel\") pod \"cilium-wwgqj\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " pod="kube-system/cilium-wwgqj"
Mar 12 23:45:36.852654 kubelet[2779]: E0312 23:45:36.852603 2779 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 12 23:45:36.852654 kubelet[2779]: E0312 23:45:36.852649 2779 projected.go:196] Error preparing data for projected volume kube-api-access-r7wq2 for pod kube-system/kube-proxy-jrr4t: configmap "kube-root-ca.crt" not found
Mar 12 23:45:36.853921 kubelet[2779]: E0312 23:45:36.852834 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f57655d2-087e-4af4-b29c-7de370429ec7-kube-api-access-r7wq2 podName:f57655d2-087e-4af4-b29c-7de370429ec7 nodeName:}" failed. No retries permitted until 2026-03-12 23:45:37.352709336 +0000 UTC m=+6.090505030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r7wq2" (UniqueName: "kubernetes.io/projected/f57655d2-087e-4af4-b29c-7de370429ec7-kube-api-access-r7wq2") pod "kube-proxy-jrr4t" (UID: "f57655d2-087e-4af4-b29c-7de370429ec7") : configmap "kube-root-ca.crt" not found
Mar 12 23:45:36.861027 kubelet[2779]: E0312 23:45:36.860863 2779 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 12 23:45:36.861027 kubelet[2779]: E0312 23:45:36.860918 2779 projected.go:196] Error preparing data for projected volume kube-api-access-dw2qq for pod kube-system/cilium-wwgqj: configmap "kube-root-ca.crt" not found
Mar 12 23:45:36.861027 kubelet[2779]: E0312 23:45:36.860994 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq podName:44489147-70fe-47c8-82b0-40df60011ce4 nodeName:}" failed. No retries permitted until 2026-03-12 23:45:37.360972831 +0000 UTC m=+6.098768485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dw2qq" (UniqueName: "kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq") pod "cilium-wwgqj" (UID: "44489147-70fe-47c8-82b0-40df60011ce4") : configmap "kube-root-ca.crt" not found
Mar 12 23:45:37.173316 systemd[1]: Created slice kubepods-besteffort-poda87f2336_862c_4c61_8b69_b274919bad96.slice - libcontainer container kubepods-besteffort-poda87f2336_862c_4c61_8b69_b274919bad96.slice.
Mar 12 23:45:37.231422 kubelet[2779]: I0312 23:45:37.231338 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87f2336-862c-4c61-8b69-b274919bad96-cilium-config-path\") pod \"cilium-operator-78cf5644cb-7jpqk\" (UID: \"a87f2336-862c-4c61-8b69-b274919bad96\") " pod="kube-system/cilium-operator-78cf5644cb-7jpqk"
Mar 12 23:45:37.232129 kubelet[2779]: I0312 23:45:37.232076 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrtk\" (UniqueName: \"kubernetes.io/projected/a87f2336-862c-4c61-8b69-b274919bad96-kube-api-access-phrtk\") pod \"cilium-operator-78cf5644cb-7jpqk\" (UID: \"a87f2336-862c-4c61-8b69-b274919bad96\") " pod="kube-system/cilium-operator-78cf5644cb-7jpqk"
Mar 12 23:45:37.480689 containerd[1543]: time="2026-03-12T23:45:37.480480041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-7jpqk,Uid:a87f2336-862c-4c61-8b69-b274919bad96,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:37.502854 containerd[1543]: time="2026-03-12T23:45:37.502750001Z" level=info msg="connecting to shim a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3" address="unix:///run/containerd/s/a30386efca90d8ad1c49dca6ea2b1572e43678b09c7a9682a9833cfab3b781cf" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:37.532099 systemd[1]: Started cri-containerd-a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3.scope - libcontainer container a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3.
Mar 12 23:45:37.573670 containerd[1543]: time="2026-03-12T23:45:37.573628067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-7jpqk,Uid:a87f2336-862c-4c61-8b69-b274919bad96,Namespace:kube-system,Attempt:0,} returns sandbox id \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\""
Mar 12 23:45:37.578113 containerd[1543]: time="2026-03-12T23:45:37.577363435Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 12 23:45:37.596021 containerd[1543]: time="2026-03-12T23:45:37.595726757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrr4t,Uid:f57655d2-087e-4af4-b29c-7de370429ec7,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:37.607972 containerd[1543]: time="2026-03-12T23:45:37.607928505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwgqj,Uid:44489147-70fe-47c8-82b0-40df60011ce4,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:37.619307 containerd[1543]: time="2026-03-12T23:45:37.619250766Z" level=info msg="connecting to shim e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9" address="unix:///run/containerd/s/71ddd3a804b6c85f99252a0c7737901da39d65f19434e05b81b51cfa7f47e38b" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:37.638359 containerd[1543]: time="2026-03-12T23:45:37.638297809Z" level=info msg="connecting to shim a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:37.645991 systemd[1]: Started cri-containerd-e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9.scope - libcontainer container e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9.
Mar 12 23:45:37.675172 systemd[1]: Started cri-containerd-a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc.scope - libcontainer container a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc.
Mar 12 23:45:37.692111 containerd[1543]: time="2026-03-12T23:45:37.692069123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrr4t,Uid:f57655d2-087e-4af4-b29c-7de370429ec7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9\""
Mar 12 23:45:37.706031 containerd[1543]: time="2026-03-12T23:45:37.705463640Z" level=info msg="CreateContainer within sandbox \"e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 12 23:45:37.721432 containerd[1543]: time="2026-03-12T23:45:37.721372579Z" level=info msg="Container 0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:37.728784 containerd[1543]: time="2026-03-12T23:45:37.728693460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwgqj,Uid:44489147-70fe-47c8-82b0-40df60011ce4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\""
Mar 12 23:45:37.734288 containerd[1543]: time="2026-03-12T23:45:37.734144135Z" level=info msg="CreateContainer within sandbox \"e37e0c3d845bb522fd6b90018e3a8071da5783f48a412fd358eba97a9e8205c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83\""
Mar 12 23:45:37.735764 containerd[1543]: time="2026-03-12T23:45:37.735667757Z" level=info msg="StartContainer for \"0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83\""
Mar 12 23:45:37.737590 containerd[1543]: time="2026-03-12T23:45:37.737538964Z" level=info msg="connecting to shim 0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83" address="unix:///run/containerd/s/71ddd3a804b6c85f99252a0c7737901da39d65f19434e05b81b51cfa7f47e38b" protocol=ttrpc version=3
Mar 12 23:45:37.767005 systemd[1]: Started cri-containerd-0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83.scope - libcontainer container 0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83.
Mar 12 23:45:37.854647 containerd[1543]: time="2026-03-12T23:45:37.854573793Z" level=info msg="StartContainer for \"0ea81220884bb29c4fa1467919bae041fecda157a59d51902684b71783f24a83\" returns successfully"
Mar 12 23:45:38.475765 kubelet[2779]: I0312 23:45:38.475643 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jrr4t" podStartSLOduration=2.475628729 podStartE2EDuration="2.475628729s" podCreationTimestamp="2026-03-12 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:38.475545598 +0000 UTC m=+7.213341292" watchObservedRunningTime="2026-03-12 23:45:38.475628729 +0000 UTC m=+7.213424383"
Mar 12 23:45:39.160072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683850278.mount: Deactivated successfully.
Mar 12 23:45:39.548316 containerd[1543]: time="2026-03-12T23:45:39.548095369Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:39.551058 containerd[1543]: time="2026-03-12T23:45:39.550646694Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 12 23:45:39.552356 containerd[1543]: time="2026-03-12T23:45:39.552080328Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:39.554366 containerd[1543]: time="2026-03-12T23:45:39.554259116Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.976844608s"
Mar 12 23:45:39.554366 containerd[1543]: time="2026-03-12T23:45:39.554297778Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 12 23:45:39.558303 containerd[1543]: time="2026-03-12T23:45:39.558185720Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 12 23:45:39.565128 containerd[1543]: time="2026-03-12T23:45:39.565088296Z" level=info msg="CreateContainer within sandbox \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 12 23:45:39.579073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390012006.mount: Deactivated successfully.
Mar 12 23:45:39.579222 containerd[1543]: time="2026-03-12T23:45:39.579043936Z" level=info msg="Container e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:39.587743 containerd[1543]: time="2026-03-12T23:45:39.587662030Z" level=info msg="CreateContainer within sandbox \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\""
Mar 12 23:45:39.588767 containerd[1543]: time="2026-03-12T23:45:39.588349470Z" level=info msg="StartContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\""
Mar 12 23:45:39.591866 containerd[1543]: time="2026-03-12T23:45:39.591823211Z" level=info msg="connecting to shim e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca" address="unix:///run/containerd/s/a30386efca90d8ad1c49dca6ea2b1572e43678b09c7a9682a9833cfab3b781cf" protocol=ttrpc version=3
Mar 12 23:45:39.620177 systemd[1]: Started cri-containerd-e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca.scope - libcontainer container e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca.
Mar 12 23:45:39.654080 containerd[1543]: time="2026-03-12T23:45:39.653833450Z" level=info msg="StartContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" returns successfully"
Mar 12 23:45:42.609226 kubelet[2779]: I0312 23:45:42.609121 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-7jpqk" podStartSLOduration=3.629108355 podStartE2EDuration="5.609106274s" podCreationTimestamp="2026-03-12 23:45:37 +0000 UTC" firstStartedPulling="2026-03-12 23:45:37.576183475 +0000 UTC m=+6.313979169" lastFinishedPulling="2026-03-12 23:45:39.556181394 +0000 UTC m=+8.293977088" observedRunningTime="2026-03-12 23:45:40.507872537 +0000 UTC m=+9.245668191" watchObservedRunningTime="2026-03-12 23:45:42.609106274 +0000 UTC m=+11.346901968"
Mar 12 23:45:43.237017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792085476.mount: Deactivated successfully.
Mar 12 23:45:44.671526 containerd[1543]: time="2026-03-12T23:45:44.671438036Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:44.672935 containerd[1543]: time="2026-03-12T23:45:44.672853201Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 12 23:45:44.674414 containerd[1543]: time="2026-03-12T23:45:44.674336717Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 23:45:44.676811 containerd[1543]: time="2026-03-12T23:45:44.676262715Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.118021604s"
Mar 12 23:45:44.676811 containerd[1543]: time="2026-03-12T23:45:44.676309296Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 12 23:45:44.683259 containerd[1543]: time="2026-03-12T23:45:44.683219845Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 23:45:44.694314 containerd[1543]: time="2026-03-12T23:45:44.694266238Z" level=info msg="Container cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:44.700609 containerd[1543]: time="2026-03-12T23:45:44.700545779Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\""
Mar 12 23:45:44.701934 containerd[1543]: time="2026-03-12T23:45:44.701361951Z" level=info msg="StartContainer for \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\""
Mar 12 23:45:44.702635 containerd[1543]: time="2026-03-12T23:45:44.702595873Z" level=info msg="connecting to shim cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" protocol=ttrpc version=3
Mar 12 23:45:44.730066 systemd[1]: Started cri-containerd-cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb.scope - libcontainer container cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb.
Mar 12 23:45:44.764404 containerd[1543]: time="2026-03-12T23:45:44.764304070Z" level=info msg="StartContainer for \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" returns successfully"
Mar 12 23:45:44.779860 systemd[1]: cri-containerd-cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb.scope: Deactivated successfully.
Mar 12 23:45:44.780778 systemd[1]: cri-containerd-cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb.scope: Consumed 27ms CPU time, 6.5M memory peak, 2.1M written to disk.
Mar 12 23:45:44.785395 containerd[1543]: time="2026-03-12T23:45:44.785245412Z" level=info msg="received container exit event container_id:\"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" id:\"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" pid:3250 exited_at:{seconds:1773359144 nanos:784266726}"
Mar 12 23:45:44.810145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb-rootfs.mount: Deactivated successfully.
Mar 12 23:45:45.498878 containerd[1543]: time="2026-03-12T23:45:45.498796624Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 23:45:45.508037 containerd[1543]: time="2026-03-12T23:45:45.507904905Z" level=info msg="Container 34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:45.515769 containerd[1543]: time="2026-03-12T23:45:45.515699736Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\""
Mar 12 23:45:45.518326 containerd[1543]: time="2026-03-12T23:45:45.518276977Z" level=info msg="StartContainer for \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\""
Mar 12 23:45:45.521028 containerd[1543]: time="2026-03-12T23:45:45.520983794Z" level=info msg="connecting to shim 34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" protocol=ttrpc version=3
Mar 12 23:45:45.539968 systemd[1]: Started cri-containerd-34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c.scope - libcontainer container 34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c.
Mar 12 23:45:45.573608 containerd[1543]: time="2026-03-12T23:45:45.573502956Z" level=info msg="StartContainer for \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" returns successfully"
Mar 12 23:45:45.588123 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 23:45:45.588354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:45.589234 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:45.591249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 23:45:45.599001 systemd[1]: cri-containerd-34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c.scope: Deactivated successfully.
Mar 12 23:45:45.602141 containerd[1543]: time="2026-03-12T23:45:45.601686174Z" level=info msg="received container exit event container_id:\"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" id:\"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" pid:3296 exited_at:{seconds:1773359145 nanos:600416062}"
Mar 12 23:45:45.632788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 23:45:46.510028 containerd[1543]: time="2026-03-12T23:45:46.509955565Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 23:45:46.535999 containerd[1543]: time="2026-03-12T23:45:46.535395616Z" level=info msg="Container 0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:46.547423 containerd[1543]: time="2026-03-12T23:45:46.547351704Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\""
Mar 12 23:45:46.548779 containerd[1543]: time="2026-03-12T23:45:46.548275728Z" level=info msg="StartContainer for \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\""
Mar 12 23:45:46.550601 containerd[1543]: time="2026-03-12T23:45:46.550559397Z" level=info msg="connecting to shim 0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" protocol=ttrpc version=3
Mar 12 23:45:46.577056 systemd[1]: Started cri-containerd-0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad.scope - libcontainer container 0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad.
Mar 12 23:45:46.646949 containerd[1543]: time="2026-03-12T23:45:46.646819114Z" level=info msg="StartContainer for \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" returns successfully"
Mar 12 23:45:46.649247 systemd[1]: cri-containerd-0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad.scope: Deactivated successfully.
Mar 12 23:45:46.651949 containerd[1543]: time="2026-03-12T23:45:46.651897064Z" level=info msg="received container exit event container_id:\"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" id:\"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" pid:3342 exited_at:{seconds:1773359146 nanos:651508063}"
Mar 12 23:45:46.695359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad-rootfs.mount: Deactivated successfully.
Mar 12 23:45:47.511258 containerd[1543]: time="2026-03-12T23:45:47.511148007Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 23:45:47.536211 containerd[1543]: time="2026-03-12T23:45:47.536058464Z" level=info msg="Container e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:47.546075 containerd[1543]: time="2026-03-12T23:45:47.546032747Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\""
Mar 12 23:45:47.547411 containerd[1543]: time="2026-03-12T23:45:47.547245269Z" level=info msg="StartContainer for \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\""
Mar 12 23:45:47.548752 containerd[1543]: time="2026-03-12T23:45:47.548683881Z" level=info msg="connecting to shim e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" protocol=ttrpc version=3
Mar 12 23:45:47.573089 systemd[1]: Started cri-containerd-e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143.scope - libcontainer container e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143.
Mar 12 23:45:47.609390 systemd[1]: cri-containerd-e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143.scope: Deactivated successfully.
Mar 12 23:45:47.613193 containerd[1543]: time="2026-03-12T23:45:47.612723324Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice/cri-containerd-e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143.scope/memory.events\": no such file or directory"
Mar 12 23:45:47.614184 containerd[1543]: time="2026-03-12T23:45:47.614123641Z" level=info msg="received container exit event container_id:\"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" id:\"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" pid:3381 exited_at:{seconds:1773359147 nanos:611885952}"
Mar 12 23:45:47.626949 containerd[1543]: time="2026-03-12T23:45:47.626894795Z" level=info msg="StartContainer for \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" returns successfully"
Mar 12 23:45:47.694963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143-rootfs.mount: Deactivated successfully.
Mar 12 23:45:48.527784 containerd[1543]: time="2026-03-12T23:45:48.527460933Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 23:45:48.542477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214398640.mount: Deactivated successfully.
Mar 12 23:45:48.546001 containerd[1543]: time="2026-03-12T23:45:48.542833099Z" level=info msg="Container 53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:48.557320 containerd[1543]: time="2026-03-12T23:45:48.557265226Z" level=info msg="CreateContainer within sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\""
Mar 12 23:45:48.560631 containerd[1543]: time="2026-03-12T23:45:48.560417225Z" level=info msg="StartContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\""
Mar 12 23:45:48.563181 containerd[1543]: time="2026-03-12T23:45:48.563138500Z" level=info msg="connecting to shim 53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b" address="unix:///run/containerd/s/49d652625bcd8ce479cdad2637876bbbf581bf23226fe2ac7ec4d1d310bf5748" protocol=ttrpc version=3
Mar 12 23:45:48.591901 systemd[1]: Started cri-containerd-53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b.scope - libcontainer container 53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b.
Mar 12 23:45:48.642679 containerd[1543]: time="2026-03-12T23:45:48.642628326Z" level=info msg="StartContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" returns successfully"
Mar 12 23:45:48.790862 kubelet[2779]: I0312 23:45:48.790723 2779 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 12 23:45:48.840694 systemd[1]: Created slice kubepods-burstable-pod481f0b52_84a8_4d95_87e3_dff397fedd27.slice - libcontainer container kubepods-burstable-pod481f0b52_84a8_4d95_87e3_dff397fedd27.slice.
Mar 12 23:45:48.849854 systemd[1]: Created slice kubepods-burstable-pod28af8afc_9c34_4b23_8494_f841dcbd197b.slice - libcontainer container kubepods-burstable-pod28af8afc_9c34_4b23_8494_f841dcbd197b.slice.
Mar 12 23:45:48.915216 kubelet[2779]: I0312 23:45:48.915125 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28af8afc-9c34-4b23-8494-f841dcbd197b-config-volume\") pod \"coredns-7d764666f9-s6j4x\" (UID: \"28af8afc-9c34-4b23-8494-f841dcbd197b\") " pod="kube-system/coredns-7d764666f9-s6j4x"
Mar 12 23:45:48.915216 kubelet[2779]: I0312 23:45:48.915176 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq8t\" (UniqueName: \"kubernetes.io/projected/28af8afc-9c34-4b23-8494-f841dcbd197b-kube-api-access-nmq8t\") pod \"coredns-7d764666f9-s6j4x\" (UID: \"28af8afc-9c34-4b23-8494-f841dcbd197b\") " pod="kube-system/coredns-7d764666f9-s6j4x"
Mar 12 23:45:48.915216 kubelet[2779]: I0312 23:45:48.915201 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481f0b52-84a8-4d95-87e3-dff397fedd27-config-volume\") pod \"coredns-7d764666f9-47vzx\" (UID: \"481f0b52-84a8-4d95-87e3-dff397fedd27\") " pod="kube-system/coredns-7d764666f9-47vzx"
Mar 12 23:45:48.915216 kubelet[2779]: I0312 23:45:48.915215 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjjn\" (UniqueName: \"kubernetes.io/projected/481f0b52-84a8-4d95-87e3-dff397fedd27-kube-api-access-fxjjn\") pod \"coredns-7d764666f9-47vzx\" (UID: \"481f0b52-84a8-4d95-87e3-dff397fedd27\") " pod="kube-system/coredns-7d764666f9-47vzx"
Mar 12 23:45:49.149373 containerd[1543]: time="2026-03-12T23:45:49.149027886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-47vzx,Uid:481f0b52-84a8-4d95-87e3-dff397fedd27,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:49.161692 containerd[1543]: time="2026-03-12T23:45:49.161655125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s6j4x,Uid:28af8afc-9c34-4b23-8494-f841dcbd197b,Namespace:kube-system,Attempt:0,}"
Mar 12 23:45:49.548664 kubelet[2779]: I0312 23:45:49.548181 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-wwgqj" podStartSLOduration=2.758731268 podStartE2EDuration="13.548166831s" podCreationTimestamp="2026-03-12 23:45:36 +0000 UTC" firstStartedPulling="2026-03-12 23:45:37.731273003 +0000 UTC m=+6.469068697" lastFinishedPulling="2026-03-12 23:45:48.520708566 +0000 UTC m=+17.258504260" observedRunningTime="2026-03-12 23:45:49.546630832 +0000 UTC m=+18.284426526" watchObservedRunningTime="2026-03-12 23:45:49.548166831 +0000 UTC m=+18.285962485"
Mar 12 23:45:50.813495 systemd-networkd[1432]: cilium_host: Link UP
Mar 12 23:45:50.815030 systemd-networkd[1432]: cilium_net: Link UP
Mar 12 23:45:50.815550 systemd-networkd[1432]: cilium_net: Gained carrier
Mar 12 23:45:50.816893 systemd-networkd[1432]: cilium_host: Gained carrier
Mar 12 23:45:50.929954 systemd-networkd[1432]: cilium_vxlan: Link UP
Mar 12 23:45:50.929961 systemd-networkd[1432]: cilium_vxlan: Gained carrier
Mar 12 23:45:51.075083 systemd-networkd[1432]: cilium_net: Gained IPv6LL
Mar 12 23:45:51.220933 kernel: NET: Registered PF_ALG protocol family
Mar 12 23:45:51.434975 systemd-networkd[1432]: cilium_host: Gained IPv6LL
Mar 12 23:45:51.956111 systemd-networkd[1432]: lxc_health: Link UP
Mar 12 23:45:51.958137 systemd-networkd[1432]: lxc_health: Gained carrier
Mar 12 23:45:52.209280 systemd-networkd[1432]: lxc52e3601302d2: Link UP
Mar 12 23:45:52.211767 kernel: eth0: renamed from tmp82d9b
Mar 12 23:45:52.216741 systemd-networkd[1432]: lxc52e3601302d2: Gained carrier
Mar 12 23:45:52.229310 systemd-networkd[1432]: lxc976ff43869ff: Link UP
Mar 12 23:45:52.236767 kernel: eth0: renamed from tmp7fec4
Mar 12 23:45:52.239828 systemd-networkd[1432]: lxc976ff43869ff: Gained carrier
Mar 12 23:45:52.267648 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL
Mar 12 23:45:53.228268 systemd-networkd[1432]: lxc52e3601302d2: Gained IPv6LL
Mar 12 23:45:53.803183 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Mar 12 23:45:53.995083 systemd-networkd[1432]: lxc976ff43869ff: Gained IPv6LL
Mar 12 23:45:56.238615 containerd[1543]: time="2026-03-12T23:45:56.238552857Z" level=info msg="connecting to shim 82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976" address="unix:///run/containerd/s/fcf0a8c7883d8153873df3c760e578804a6122e19effe62f8d0aadf7f59e92b8" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:56.293973 systemd[1]: Started cri-containerd-82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976.scope - libcontainer container 82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976.
Mar 12 23:45:56.302050 containerd[1543]: time="2026-03-12T23:45:56.301716953Z" level=info msg="connecting to shim 7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097" address="unix:///run/containerd/s/169c1b411d0f6d2828a26a3199c3e0ab7f86bebf74ddc86c9dd7b1c49549c20e" namespace=k8s.io protocol=ttrpc version=3
Mar 12 23:45:56.352942 systemd[1]: Started cri-containerd-7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097.scope - libcontainer container 7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097.
Mar 12 23:45:56.385090 containerd[1543]: time="2026-03-12T23:45:56.385021348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-47vzx,Uid:481f0b52-84a8-4d95-87e3-dff397fedd27,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976\""
Mar 12 23:45:56.393594 containerd[1543]: time="2026-03-12T23:45:56.393528184Z" level=info msg="CreateContainer within sandbox \"82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 23:45:56.414254 containerd[1543]: time="2026-03-12T23:45:56.414208433Z" level=info msg="Container 2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:56.417449 containerd[1543]: time="2026-03-12T23:45:56.417404038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s6j4x,Uid:28af8afc-9c34-4b23-8494-f841dcbd197b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097\""
Mar 12 23:45:56.423465 containerd[1543]: time="2026-03-12T23:45:56.423412462Z" level=info msg="CreateContainer within sandbox \"82d9ba1509312ff87cba8c19ee2ac910c3a2c3b74b0743b101e257951dd3a976\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21\""
Mar 12 23:45:56.423947 containerd[1543]: time="2026-03-12T23:45:56.423648848Z" level=info msg="CreateContainer within sandbox \"7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 23:45:56.424555 containerd[1543]: time="2026-03-12T23:45:56.424378570Z" level=info msg="StartContainer for \"2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21\""
Mar 12 23:45:56.426892 containerd[1543]: time="2026-03-12T23:45:56.426788037Z" level=info msg="connecting to shim 2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21" address="unix:///run/containerd/s/fcf0a8c7883d8153873df3c760e578804a6122e19effe62f8d0aadf7f59e92b8" protocol=ttrpc version=3
Mar 12 23:45:56.436509 containerd[1543]: time="2026-03-12T23:45:56.436460917Z" level=info msg="Container b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:45:56.449321 containerd[1543]: time="2026-03-12T23:45:56.449164355Z" level=info msg="CreateContainer within sandbox \"7fec487e1a55a79036f0df5ece0492acbf5962c0d5b462df7d78693b4856c097\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b\""
Mar 12 23:45:56.450620 containerd[1543]: time="2026-03-12T23:45:56.450082810Z" level=info msg="StartContainer for \"b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b\""
Mar 12 23:45:56.453795 containerd[1543]: time="2026-03-12T23:45:56.453706374Z" level=info msg="connecting to shim b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b" address="unix:///run/containerd/s/169c1b411d0f6d2828a26a3199c3e0ab7f86bebf74ddc86c9dd7b1c49549c20e" protocol=ttrpc version=3
Mar 12 23:45:56.456236 systemd[1]: Started cri-containerd-2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21.scope - libcontainer container 2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21.
Mar 12 23:45:56.485508 systemd[1]: Started cri-containerd-b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b.scope - libcontainer container b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b.
Mar 12 23:45:56.508375 containerd[1543]: time="2026-03-12T23:45:56.507779071Z" level=info msg="StartContainer for \"2e5d14776699b563042630d9b164717f46e209501070e607fc8b91678f71da21\" returns successfully"
Mar 12 23:45:56.541950 containerd[1543]: time="2026-03-12T23:45:56.540808820Z" level=info msg="StartContainer for \"b8c4320f8d4ea769a0755a21936aece27d969b635490070e693396db9045356b\" returns successfully"
Mar 12 23:45:56.575704 kubelet[2779]: I0312 23:45:56.575607 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-s6j4x" podStartSLOduration=19.575590814999998 podStartE2EDuration="19.575590815s" podCreationTimestamp="2026-03-12 23:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:56.57532162 +0000 UTC m=+25.313117314" watchObservedRunningTime="2026-03-12 23:45:56.575590815 +0000 UTC m=+25.313386509"
Mar 12 23:45:56.596186 kubelet[2779]: I0312 23:45:56.595796 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-47vzx" podStartSLOduration=19.595780407 podStartE2EDuration="19.595780407s" podCreationTimestamp="2026-03-12 23:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:45:56.594665819 +0000 UTC m=+25.332461473" watchObservedRunningTime="2026-03-12 23:45:56.595780407 +0000 UTC m=+25.333576101"
Mar 12 23:46:06.776287 kubelet[2779]: I0312 23:46:06.776233 2779 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 12 23:47:47.712176 systemd[1]: Started sshd@7-159.69.55.216:22-20.161.92.111:57908.service - OpenSSH per-connection server daemon (20.161.92.111:57908).
Mar 12 23:47:48.239776 sshd[4111]: Accepted publickey for core from 20.161.92.111 port 57908 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:47:48.241564 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:48.248568 systemd-logind[1478]: New session 8 of user core.
Mar 12 23:47:48.257141 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 23:47:48.621507 sshd[4114]: Connection closed by 20.161.92.111 port 57908
Mar 12 23:47:48.622494 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:48.628927 systemd[1]: sshd@7-159.69.55.216:22-20.161.92.111:57908.service: Deactivated successfully.
Mar 12 23:47:48.632958 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 23:47:48.634159 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit.
Mar 12 23:47:48.636209 systemd-logind[1478]: Removed session 8.
Mar 12 23:47:53.208595 update_engine[1480]: I20260312 23:47:53.208503 1480 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 12 23:47:53.208595 update_engine[1480]: I20260312 23:47:53.208590 1480 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 12 23:47:53.209367 update_engine[1480]: I20260312 23:47:53.209003 1480 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 12 23:47:53.211646 update_engine[1480]: I20260312 23:47:53.211570 1480 omaha_request_params.cc:62] Current group set to stable
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.211871 1480 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.211905 1480 update_attempter.cc:643] Scheduling an action processor start.
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.211943 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.212000 1480 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.212087 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.212095 1480 omaha_request_action.cc:272] Request:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]:
Mar 12 23:47:53.212144 update_engine[1480]: I20260312 23:47:53.212101 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:47:53.213356 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 12 23:47:53.213886 update_engine[1480]: I20260312 23:47:53.213840 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:47:53.214661 update_engine[1480]: I20260312 23:47:53.214604 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:47:53.215463 update_engine[1480]: E20260312 23:47:53.215312 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:47:53.215463 update_engine[1480]: I20260312 23:47:53.215400 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 12 23:47:53.739029 systemd[1]: Started sshd@8-159.69.55.216:22-20.161.92.111:46350.service - OpenSSH per-connection server daemon (20.161.92.111:46350).
Mar 12 23:47:54.260287 sshd[4127]: Accepted publickey for core from 20.161.92.111 port 46350 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:47:54.262607 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:47:54.268673 systemd-logind[1478]: New session 9 of user core.
Mar 12 23:47:54.279093 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 23:47:54.632856 sshd[4130]: Connection closed by 20.161.92.111 port 46350
Mar 12 23:47:54.634090 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Mar 12 23:47:54.639316 systemd[1]: sshd@8-159.69.55.216:22-20.161.92.111:46350.service: Deactivated successfully.
Mar 12 23:47:54.642447 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 23:47:54.643491 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit.
Mar 12 23:47:54.645715 systemd-logind[1478]: Removed session 9.
Mar 12 23:47:59.746201 systemd[1]: Started sshd@9-159.69.55.216:22-20.161.92.111:46354.service - OpenSSH per-connection server daemon (20.161.92.111:46354).
Mar 12 23:48:00.297705 sshd[4143]: Accepted publickey for core from 20.161.92.111 port 46354 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:00.299989 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:00.304994 systemd-logind[1478]: New session 10 of user core.
Mar 12 23:48:00.314044 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 23:48:00.674773 sshd[4146]: Connection closed by 20.161.92.111 port 46354
Mar 12 23:48:00.675631 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:00.680152 systemd[1]: sshd@9-159.69.55.216:22-20.161.92.111:46354.service: Deactivated successfully.
Mar 12 23:48:00.682578 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 23:48:00.683706 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit.
Mar 12 23:48:00.685451 systemd-logind[1478]: Removed session 10.
Mar 12 23:48:00.777254 systemd[1]: Started sshd@10-159.69.55.216:22-20.161.92.111:41834.service - OpenSSH per-connection server daemon (20.161.92.111:41834).
Mar 12 23:48:01.311340 sshd[4159]: Accepted publickey for core from 20.161.92.111 port 41834 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:01.313970 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:01.320330 systemd-logind[1478]: New session 11 of user core.
Mar 12 23:48:01.328008 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 23:48:01.720831 sshd[4162]: Connection closed by 20.161.92.111 port 41834
Mar 12 23:48:01.721423 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:01.730663 systemd[1]: sshd@10-159.69.55.216:22-20.161.92.111:41834.service: Deactivated successfully.
Mar 12 23:48:01.734553 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 23:48:01.736374 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit.
Mar 12 23:48:01.738325 systemd-logind[1478]: Removed session 11.
Mar 12 23:48:01.827522 systemd[1]: Started sshd@11-159.69.55.216:22-20.161.92.111:41844.service - OpenSSH per-connection server daemon (20.161.92.111:41844).
Mar 12 23:48:02.357224 sshd[4171]: Accepted publickey for core from 20.161.92.111 port 41844 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:02.359884 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:02.366011 systemd-logind[1478]: New session 12 of user core.
Mar 12 23:48:02.371991 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 23:48:02.729174 sshd[4174]: Connection closed by 20.161.92.111 port 41844
Mar 12 23:48:02.730050 sshd-session[4171]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:02.737668 systemd[1]: sshd@11-159.69.55.216:22-20.161.92.111:41844.service: Deactivated successfully.
Mar 12 23:48:02.739858 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 23:48:02.740883 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit.
Mar 12 23:48:02.742549 systemd-logind[1478]: Removed session 12.
Mar 12 23:48:03.203992 update_engine[1480]: I20260312 23:48:03.203850 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:03.203992 update_engine[1480]: I20260312 23:48:03.203972 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:03.204623 update_engine[1480]: I20260312 23:48:03.204507 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:03.205119 update_engine[1480]: E20260312 23:48:03.205047 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:03.205216 update_engine[1480]: I20260312 23:48:03.205156 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 12 23:48:07.843373 systemd[1]: Started sshd@12-159.69.55.216:22-20.161.92.111:41852.service - OpenSSH per-connection server daemon (20.161.92.111:41852).
Mar 12 23:48:08.366836 sshd[4188]: Accepted publickey for core from 20.161.92.111 port 41852 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:08.368337 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:08.374083 systemd-logind[1478]: New session 13 of user core.
Mar 12 23:48:08.381528 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 23:48:08.729983 sshd[4193]: Connection closed by 20.161.92.111 port 41852
Mar 12 23:48:08.731198 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:08.739414 systemd[1]: sshd@12-159.69.55.216:22-20.161.92.111:41852.service: Deactivated successfully.
Mar 12 23:48:08.742675 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 23:48:08.744380 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit.
Mar 12 23:48:08.746413 systemd-logind[1478]: Removed session 13.
Mar 12 23:48:13.208216 update_engine[1480]: I20260312 23:48:13.208038 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:13.208216 update_engine[1480]: I20260312 23:48:13.208194 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:13.208776 update_engine[1480]: I20260312 23:48:13.208675 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:13.209261 update_engine[1480]: E20260312 23:48:13.209218 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:13.209357 update_engine[1480]: I20260312 23:48:13.209324 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 12 23:48:13.843257 systemd[1]: Started sshd@13-159.69.55.216:22-20.161.92.111:37910.service - OpenSSH per-connection server daemon (20.161.92.111:37910).
Mar 12 23:48:14.369828 sshd[4205]: Accepted publickey for core from 20.161.92.111 port 37910 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:14.371451 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:14.376866 systemd-logind[1478]: New session 14 of user core.
Mar 12 23:48:14.385060 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 23:48:14.730842 sshd[4208]: Connection closed by 20.161.92.111 port 37910
Mar 12 23:48:14.731789 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:14.738877 systemd[1]: sshd@13-159.69.55.216:22-20.161.92.111:37910.service: Deactivated successfully.
Mar 12 23:48:14.743309 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 23:48:14.747789 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit.
Mar 12 23:48:14.749792 systemd-logind[1478]: Removed session 14.
Mar 12 23:48:14.845285 systemd[1]: Started sshd@14-159.69.55.216:22-20.161.92.111:37912.service - OpenSSH per-connection server daemon (20.161.92.111:37912).
Mar 12 23:48:15.387169 sshd[4219]: Accepted publickey for core from 20.161.92.111 port 37912 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:15.389311 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:15.397175 systemd-logind[1478]: New session 15 of user core.
Mar 12 23:48:15.405138 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 23:48:15.806583 sshd[4222]: Connection closed by 20.161.92.111 port 37912
Mar 12 23:48:15.807336 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:15.813874 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit.
Mar 12 23:48:15.814695 systemd[1]: sshd@14-159.69.55.216:22-20.161.92.111:37912.service: Deactivated successfully.
Mar 12 23:48:15.823418 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 23:48:15.825620 systemd-logind[1478]: Removed session 15.
Mar 12 23:48:15.916646 systemd[1]: Started sshd@15-159.69.55.216:22-20.161.92.111:37922.service - OpenSSH per-connection server daemon (20.161.92.111:37922).
Mar 12 23:48:16.464537 sshd[4232]: Accepted publickey for core from 20.161.92.111 port 37922 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:16.467645 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:16.474037 systemd-logind[1478]: New session 16 of user core.
Mar 12 23:48:16.483500 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 23:48:17.437810 sshd[4235]: Connection closed by 20.161.92.111 port 37922
Mar 12 23:48:17.438916 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:17.443090 systemd[1]: sshd@15-159.69.55.216:22-20.161.92.111:37922.service: Deactivated successfully.
Mar 12 23:48:17.447306 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 23:48:17.448402 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit.
Mar 12 23:48:17.450192 systemd-logind[1478]: Removed session 16.
Mar 12 23:48:17.542616 systemd[1]: Started sshd@16-159.69.55.216:22-20.161.92.111:37934.service - OpenSSH per-connection server daemon (20.161.92.111:37934).
Mar 12 23:48:18.071406 sshd[4251]: Accepted publickey for core from 20.161.92.111 port 37934 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:18.073285 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:18.079591 systemd-logind[1478]: New session 17 of user core.
Mar 12 23:48:18.088077 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 23:48:18.558950 sshd[4254]: Connection closed by 20.161.92.111 port 37934
Mar 12 23:48:18.559499 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:18.566608 systemd[1]: sshd@16-159.69.55.216:22-20.161.92.111:37934.service: Deactivated successfully.
Mar 12 23:48:18.570124 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 23:48:18.572386 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit.
Mar 12 23:48:18.576044 systemd-logind[1478]: Removed session 17.
Mar 12 23:48:18.673026 systemd[1]: Started sshd@17-159.69.55.216:22-20.161.92.111:37936.service - OpenSSH per-connection server daemon (20.161.92.111:37936).
Mar 12 23:48:19.218915 sshd[4266]: Accepted publickey for core from 20.161.92.111 port 37936 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:19.221543 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:19.227405 systemd-logind[1478]: New session 18 of user core.
Mar 12 23:48:19.237009 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 23:48:19.592849 sshd[4269]: Connection closed by 20.161.92.111 port 37936
Mar 12 23:48:19.593344 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:19.597113 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit.
Mar 12 23:48:19.598249 systemd[1]: sshd@17-159.69.55.216:22-20.161.92.111:37936.service: Deactivated successfully.
Mar 12 23:48:19.601484 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 23:48:19.604769 systemd-logind[1478]: Removed session 18.
Mar 12 23:48:23.208236 update_engine[1480]: I20260312 23:48:23.207444 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:23.208236 update_engine[1480]: I20260312 23:48:23.207577 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:23.208236 update_engine[1480]: I20260312 23:48:23.208164 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:23.209866 update_engine[1480]: E20260312 23:48:23.209066 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209183 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209192 1480 omaha_request_action.cc:617] Omaha request response:
Mar 12 23:48:23.209866 update_engine[1480]: E20260312 23:48:23.209275 1480 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209291 1480 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209295 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209300 1480 update_attempter.cc:306] Processing Done.
Mar 12 23:48:23.209866 update_engine[1480]: E20260312 23:48:23.209313 1480 update_attempter.cc:619] Update failed.
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209317 1480 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209322 1480 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209327 1480 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209399 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209424 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 23:48:23.209866 update_engine[1480]: I20260312 23:48:23.209430 1480 omaha_request_action.cc:272] Request:
Mar 12 23:48:23.209866 update_engine[1480]:
Mar 12 23:48:23.209866 update_engine[1480]:
Mar 12 23:48:23.211062 update_engine[1480]:
Mar 12 23:48:23.211062 update_engine[1480]:
Mar 12 23:48:23.211062 update_engine[1480]:
Mar 12 23:48:23.211062 update_engine[1480]:
Mar 12 23:48:23.211062 update_engine[1480]: I20260312 23:48:23.209436 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 23:48:23.211062 update_engine[1480]: I20260312 23:48:23.209506 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 23:48:23.211062 update_engine[1480]: I20260312 23:48:23.209959 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 23:48:23.211457 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 12 23:48:23.211850 update_engine[1480]: E20260312 23:48:23.211227 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211323 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211333 1480 omaha_request_action.cc:617] Omaha request response:
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211344 1480 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211347 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211352 1480 update_attempter.cc:306] Processing Done.
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211359 1480 update_attempter.cc:310] Error event sent.
Mar 12 23:48:23.211850 update_engine[1480]: I20260312 23:48:23.211366 1480 update_check_scheduler.cc:74] Next update check in 45m21s
Mar 12 23:48:23.212429 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 12 23:48:24.694767 systemd[1]: Started sshd@18-159.69.55.216:22-20.161.92.111:47922.service - OpenSSH per-connection server daemon (20.161.92.111:47922).
Mar 12 23:48:25.230345 sshd[4283]: Accepted publickey for core from 20.161.92.111 port 47922 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:25.233551 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:25.239973 systemd-logind[1478]: New session 19 of user core.
Mar 12 23:48:25.248082 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 23:48:25.591523 sshd[4286]: Connection closed by 20.161.92.111 port 47922
Mar 12 23:48:25.592177 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:25.601812 systemd[1]: sshd@18-159.69.55.216:22-20.161.92.111:47922.service: Deactivated successfully.
Mar 12 23:48:25.604907 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 23:48:25.607857 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit.
Mar 12 23:48:25.610036 systemd-logind[1478]: Removed session 19.
Mar 12 23:48:30.704940 systemd[1]: Started sshd@19-159.69.55.216:22-20.161.92.111:42940.service - OpenSSH per-connection server daemon (20.161.92.111:42940).
Mar 12 23:48:31.236803 sshd[4299]: Accepted publickey for core from 20.161.92.111 port 42940 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:31.238781 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:31.244811 systemd-logind[1478]: New session 20 of user core.
Mar 12 23:48:31.252046 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 23:48:31.599012 sshd[4302]: Connection closed by 20.161.92.111 port 42940
Mar 12 23:48:31.599714 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Mar 12 23:48:31.605218 systemd[1]: sshd@19-159.69.55.216:22-20.161.92.111:42940.service: Deactivated successfully.
Mar 12 23:48:31.610263 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 23:48:31.612622 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit.
Mar 12 23:48:31.614567 systemd-logind[1478]: Removed session 20.
Mar 12 23:48:31.711082 systemd[1]: Started sshd@20-159.69.55.216:22-20.161.92.111:42944.service - OpenSSH per-connection server daemon (20.161.92.111:42944).
Mar 12 23:48:32.248647 sshd[4316]: Accepted publickey for core from 20.161.92.111 port 42944 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4
Mar 12 23:48:32.250901 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 23:48:32.258179 systemd-logind[1478]: New session 21 of user core.
Mar 12 23:48:32.263019 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 23:48:34.590701 containerd[1543]: time="2026-03-12T23:48:34.590634993Z" level=info msg="StopContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" with timeout 30 (s)"
Mar 12 23:48:34.591769 containerd[1543]: time="2026-03-12T23:48:34.591724142Z" level=info msg="Stop container \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" with signal terminated"
Mar 12 23:48:34.619485 containerd[1543]: time="2026-03-12T23:48:34.619437662Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 23:48:34.628633 systemd[1]: cri-containerd-e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca.scope: Deactivated successfully.
Mar 12 23:48:34.635023 containerd[1543]: time="2026-03-12T23:48:34.634882240Z" level=info msg="received container exit event container_id:\"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" id:\"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" pid:3188 exited_at:{seconds:1773359314 nanos:632240457}"
Mar 12 23:48:34.644479 containerd[1543]: time="2026-03-12T23:48:34.644252014Z" level=info msg="StopContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" with timeout 2 (s)"
Mar 12 23:48:34.644992 containerd[1543]: time="2026-03-12T23:48:34.644848433Z" level=info msg="Stop container \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" with signal terminated"
Mar 12 23:48:34.657414 systemd-networkd[1432]: lxc_health: Link DOWN
Mar 12 23:48:34.657424 systemd-networkd[1432]: lxc_health: Lost carrier
Mar 12 23:48:34.679607 systemd[1]: cri-containerd-53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b.scope: Deactivated successfully.
Mar 12 23:48:34.680703 systemd[1]: cri-containerd-53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b.scope: Consumed 7.359s CPU time, 124.5M memory peak, 112K read from disk, 12.9M written to disk.
Mar 12 23:48:34.697571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca-rootfs.mount: Deactivated successfully.
Mar 12 23:48:34.699490 containerd[1543]: time="2026-03-12T23:48:34.699425909Z" level=info msg="received container exit event container_id:\"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" id:\"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" pid:3418 exited_at:{seconds:1773359314 nanos:699175284}"
Mar 12 23:48:34.722487 containerd[1543]: time="2026-03-12T23:48:34.722437321Z" level=info msg="StopContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" returns successfully"
Mar 12 23:48:34.725209 containerd[1543]: time="2026-03-12T23:48:34.724659503Z" level=info msg="StopPodSandbox for \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\""
Mar 12 23:48:34.725209 containerd[1543]: time="2026-03-12T23:48:34.724764593Z" level=info msg="Container to stop \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.732243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b-rootfs.mount: Deactivated successfully.
Mar 12 23:48:34.738124 systemd[1]: cri-containerd-a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3.scope: Deactivated successfully.
Mar 12 23:48:34.744594 containerd[1543]: time="2026-03-12T23:48:34.744539283Z" level=info msg="StopContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" returns successfully"
Mar 12 23:48:34.745254 containerd[1543]: time="2026-03-12T23:48:34.745148263Z" level=info msg="received sandbox exit event container_id:\"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" id:\"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" exit_status:137 exited_at:{seconds:1773359314 nanos:743187868}" monitor_name=podsandbox
Mar 12 23:48:34.745788 containerd[1543]: time="2026-03-12T23:48:34.745741842Z" level=info msg="StopPodSandbox for \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\""
Mar 12 23:48:34.745943 containerd[1543]: time="2026-03-12T23:48:34.745925501Z" level=info msg="Container to stop \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.746207 containerd[1543]: time="2026-03-12T23:48:34.745984707Z" level=info msg="Container to stop \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.746207 containerd[1543]: time="2026-03-12T23:48:34.745997468Z" level=info msg="Container to stop \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.746207 containerd[1543]: time="2026-03-12T23:48:34.746006269Z" level=info msg="Container to stop \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.746207 containerd[1543]: time="2026-03-12T23:48:34.746016350Z" level=info msg="Container to stop \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 23:48:34.754664 systemd[1]: cri-containerd-a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc.scope: Deactivated successfully.
Mar 12 23:48:34.758387 containerd[1543]: time="2026-03-12T23:48:34.758278051Z" level=info msg="received sandbox exit event container_id:\"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" id:\"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" exit_status:137 exited_at:{seconds:1773359314 nanos:757449889}" monitor_name=podsandbox
Mar 12 23:48:34.776512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3-rootfs.mount: Deactivated successfully.
Mar 12 23:48:34.783463 containerd[1543]: time="2026-03-12T23:48:34.783417595Z" level=info msg="shim disconnected" id=a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3 namespace=k8s.io
Mar 12 23:48:34.785634 containerd[1543]: time="2026-03-12T23:48:34.783876801Z" level=warning msg="cleaning up after shim disconnected" id=a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3 namespace=k8s.io
Mar 12 23:48:34.785449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc-rootfs.mount: Deactivated successfully.
Mar 12 23:48:34.786209 containerd[1543]: time="2026-03-12T23:48:34.786181790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 23:48:34.788794 containerd[1543]: time="2026-03-12T23:48:34.788388450Z" level=info msg="shim disconnected" id=a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc namespace=k8s.io Mar 12 23:48:34.788893 containerd[1543]: time="2026-03-12T23:48:34.788806332Z" level=warning msg="cleaning up after shim disconnected" id=a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc namespace=k8s.io Mar 12 23:48:34.788893 containerd[1543]: time="2026-03-12T23:48:34.788850776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 23:48:34.801327 containerd[1543]: time="2026-03-12T23:48:34.801281814Z" level=info msg="received sandbox container exit event sandbox_id:\"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" exit_status:137 exited_at:{seconds:1773359314 nanos:743187868}" monitor_name=criService Mar 12 23:48:34.802128 containerd[1543]: time="2026-03-12T23:48:34.802093615Z" level=info msg="received sandbox container exit event sandbox_id:\"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" exit_status:137 exited_at:{seconds:1773359314 nanos:757449889}" monitor_name=criService Mar 12 23:48:34.803954 containerd[1543]: time="2026-03-12T23:48:34.803846510Z" level=info msg="TearDown network for sandbox \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" successfully" Mar 12 23:48:34.803954 containerd[1543]: time="2026-03-12T23:48:34.803897435Z" level=info msg="StopPodSandbox for \"a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3\" returns successfully" Mar 12 23:48:34.804661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc-shm.mount: Deactivated successfully. 
Mar 12 23:48:34.805409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a919fe37e2e616a64d28df395d26e6e7876433b2d3241123fd7f756e20a812c3-shm.mount: Deactivated successfully. Mar 12 23:48:34.806175 containerd[1543]: time="2026-03-12T23:48:34.805532518Z" level=info msg="TearDown network for sandbox \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" successfully" Mar 12 23:48:34.806175 containerd[1543]: time="2026-03-12T23:48:34.805556480Z" level=info msg="StopPodSandbox for \"a2f8f0a3e8b36706c3982925e1753d47c7d4a24d53db2511f47414447ed541cc\" returns successfully" Mar 12 23:48:34.902032 kubelet[2779]: I0312 23:48:34.901962 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a87f2336-862c-4c61-8b69-b274919bad96-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87f2336-862c-4c61-8b69-b274919bad96-cilium-config-path\") pod \"a87f2336-862c-4c61-8b69-b274919bad96\" (UID: \"a87f2336-862c-4c61-8b69-b274919bad96\") " Mar 12 23:48:34.903142 kubelet[2779]: I0312 23:48:34.902066 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/a87f2336-862c-4c61-8b69-b274919bad96-kube-api-access-phrtk\" (UniqueName: \"kubernetes.io/projected/a87f2336-862c-4c61-8b69-b274919bad96-kube-api-access-phrtk\") pod \"a87f2336-862c-4c61-8b69-b274919bad96\" (UID: \"a87f2336-862c-4c61-8b69-b274919bad96\") " Mar 12 23:48:34.909069 kubelet[2779]: I0312 23:48:34.909022 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87f2336-862c-4c61-8b69-b274919bad96-kube-api-access-phrtk" pod "a87f2336-862c-4c61-8b69-b274919bad96" (UID: "a87f2336-862c-4c61-8b69-b274919bad96"). InnerVolumeSpecName "kube-api-access-phrtk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 23:48:34.909069 kubelet[2779]: I0312 23:48:34.909140 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87f2336-862c-4c61-8b69-b274919bad96-cilium-config-path" pod "a87f2336-862c-4c61-8b69-b274919bad96" (UID: "a87f2336-862c-4c61-8b69-b274919bad96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 23:48:35.002759 kubelet[2779]: I0312 23:48:35.002662 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-bpf-maps\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.002913 kubelet[2779]: I0312 23:48:35.002868 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-bpf-maps" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.003151 kubelet[2779]: I0312 23:48:35.003101 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/44489147-70fe-47c8-82b0-40df60011ce4-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44489147-70fe-47c8-82b0-40df60011ce4-cilium-config-path\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003218 kubelet[2779]: I0312 23:48:35.003185 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-hubble-tls\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-hubble-tls\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003671 kubelet[2779]: I0312 23:48:35.003234 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-kernel\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003671 kubelet[2779]: I0312 23:48:35.003288 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-etc-cni-netd\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003671 kubelet[2779]: I0312 23:48:35.003355 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-hostproc\" (UniqueName: 
\"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-hostproc\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003671 kubelet[2779]: I0312 23:48:35.003395 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-run\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-run\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003671 kubelet[2779]: I0312 23:48:35.003430 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-xtables-lock\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003886 kubelet[2779]: I0312 23:48:35.003505 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-net\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003886 kubelet[2779]: I0312 23:48:35.003527 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003886 kubelet[2779]: I0312 23:48:35.003543 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cni-path\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cni-path\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003886 kubelet[2779]: I0312 23:48:35.003563 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-cgroup\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.003886 kubelet[2779]: I0312 23:48:35.003582 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/44489147-70fe-47c8-82b0-40df60011ce4-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44489147-70fe-47c8-82b0-40df60011ce4-clustermesh-secrets\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.004018 kubelet[2779]: I0312 23:48:35.003600 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-lib-modules\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-lib-modules\") pod \"44489147-70fe-47c8-82b0-40df60011ce4\" (UID: \"44489147-70fe-47c8-82b0-40df60011ce4\") " Mar 12 23:48:35.004018 kubelet[2779]: I0312 23:48:35.003641 2779 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87f2336-862c-4c61-8b69-b274919bad96-cilium-config-path\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.004018 kubelet[2779]: I0312 23:48:35.003651 2779 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-bpf-maps\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.004018 kubelet[2779]: I0312 23:48:35.003684 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phrtk\" (UniqueName: \"kubernetes.io/projected/a87f2336-862c-4c61-8b69-b274919bad96-kube-api-access-phrtk\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.004018 kubelet[2779]: I0312 23:48:35.003757 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-lib-modules" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006073 kubelet[2779]: I0312 23:48:35.006032 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44489147-70fe-47c8-82b0-40df60011ce4-cilium-config-path" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 23:48:35.006214 kubelet[2779]: I0312 23:48:35.006196 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-xtables-lock" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006280 kubelet[2779]: I0312 23:48:35.006251 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-hubble-tls" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 23:48:35.006341 kubelet[2779]: I0312 23:48:35.006274 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-net" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006429 kubelet[2779]: I0312 23:48:35.006414 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-kernel" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006514 kubelet[2779]: I0312 23:48:35.006499 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-etc-cni-netd" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006592 kubelet[2779]: I0312 23:48:35.006576 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-hostproc" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006672 kubelet[2779]: I0312 23:48:35.006657 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-run" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006825 kubelet[2779]: I0312 23:48:35.006807 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-cgroup" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.006920 kubelet[2779]: I0312 23:48:35.006905 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cni-path" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 23:48:35.008646 kubelet[2779]: I0312 23:48:35.008578 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "kube-api-access-dw2qq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 23:48:35.010054 kubelet[2779]: I0312 23:48:35.010022 2779 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44489147-70fe-47c8-82b0-40df60011ce4-clustermesh-secrets" pod "44489147-70fe-47c8-82b0-40df60011ce4" (UID: "44489147-70fe-47c8-82b0-40df60011ce4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 23:48:35.026408 kubelet[2779]: I0312 23:48:35.026350 2779 scope.go:122] "RemoveContainer" containerID="53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b" Mar 12 23:48:35.029917 containerd[1543]: time="2026-03-12T23:48:35.029291968Z" level=info msg="RemoveContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\"" Mar 12 23:48:35.037399 systemd[1]: Removed slice kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice - libcontainer container kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice. Mar 12 23:48:35.037509 systemd[1]: kubepods-burstable-pod44489147_70fe_47c8_82b0_40df60011ce4.slice: Consumed 7.466s CPU time, 124.9M memory peak, 112K read from disk, 15M written to disk. Mar 12 23:48:35.042230 containerd[1543]: time="2026-03-12T23:48:35.042186694Z" level=info msg="RemoveContainer for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" returns successfully" Mar 12 23:48:35.043797 systemd[1]: Removed slice kubepods-besteffort-poda87f2336_862c_4c61_8b69_b274919bad96.slice - libcontainer container kubepods-besteffort-poda87f2336_862c_4c61_8b69_b274919bad96.slice. 
Mar 12 23:48:35.044876 kubelet[2779]: I0312 23:48:35.044847 2779 scope.go:122] "RemoveContainer" containerID="e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143" Mar 12 23:48:35.050967 containerd[1543]: time="2026-03-12T23:48:35.050822075Z" level=info msg="RemoveContainer for \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\"" Mar 12 23:48:35.057362 containerd[1543]: time="2026-03-12T23:48:35.057311362Z" level=info msg="RemoveContainer for \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" returns successfully" Mar 12 23:48:35.058014 kubelet[2779]: I0312 23:48:35.057638 2779 scope.go:122] "RemoveContainer" containerID="0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad" Mar 12 23:48:35.063080 containerd[1543]: time="2026-03-12T23:48:35.062905000Z" level=info msg="RemoveContainer for \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\"" Mar 12 23:48:35.068931 containerd[1543]: time="2026-03-12T23:48:35.068807949Z" level=info msg="RemoveContainer for \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" returns successfully" Mar 12 23:48:35.069429 kubelet[2779]: I0312 23:48:35.069324 2779 scope.go:122] "RemoveContainer" containerID="34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c" Mar 12 23:48:35.071765 containerd[1543]: time="2026-03-12T23:48:35.071690436Z" level=info msg="RemoveContainer for \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\"" Mar 12 23:48:35.076305 containerd[1543]: time="2026-03-12T23:48:35.076269533Z" level=info msg="RemoveContainer for \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" returns successfully" Mar 12 23:48:35.076677 kubelet[2779]: I0312 23:48:35.076617 2779 scope.go:122] "RemoveContainer" containerID="cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb" Mar 12 23:48:35.079855 containerd[1543]: time="2026-03-12T23:48:35.079428048Z" level=info msg="RemoveContainer for 
\"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\"" Mar 12 23:48:35.084802 containerd[1543]: time="2026-03-12T23:48:35.084690212Z" level=info msg="RemoveContainer for \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" returns successfully" Mar 12 23:48:35.085322 kubelet[2779]: I0312 23:48:35.085266 2779 scope.go:122] "RemoveContainer" containerID="53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b" Mar 12 23:48:35.086581 containerd[1543]: time="2026-03-12T23:48:35.086315934Z" level=error msg="ContainerStatus for \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\": not found" Mar 12 23:48:35.087509 kubelet[2779]: E0312 23:48:35.087249 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\": not found" containerID="53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b" Mar 12 23:48:35.087509 kubelet[2779]: I0312 23:48:35.087309 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b"} err="failed to get container status \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\": rpc error: code = NotFound desc = an error occurred when try to find container \"53862f39cda84bff37d7aadd4f16f2116d0fd46c45578cf2159a5fc713f2351b\": not found" Mar 12 23:48:35.087509 kubelet[2779]: I0312 23:48:35.087373 2779 scope.go:122] "RemoveContainer" containerID="e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143" Mar 12 23:48:35.087803 containerd[1543]: time="2026-03-12T23:48:35.087602063Z" level=error msg="ContainerStatus for 
\"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\": not found" Mar 12 23:48:35.088331 kubelet[2779]: E0312 23:48:35.088159 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\": not found" containerID="e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143" Mar 12 23:48:35.088331 kubelet[2779]: I0312 23:48:35.088236 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143"} err="failed to get container status \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\": rpc error: code = NotFound desc = an error occurred when try to find container \"e24a4c123439f19e3d0fa9596c71bfef30dd7afbcc474b603fe879d94f2fa143\": not found" Mar 12 23:48:35.088331 kubelet[2779]: I0312 23:48:35.088264 2779 scope.go:122] "RemoveContainer" containerID="0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad" Mar 12 23:48:35.089000 containerd[1543]: time="2026-03-12T23:48:35.088927035Z" level=error msg="ContainerStatus for \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\": not found" Mar 12 23:48:35.089877 kubelet[2779]: E0312 23:48:35.089398 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\": not found" 
containerID="0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad" Mar 12 23:48:35.089877 kubelet[2779]: I0312 23:48:35.089423 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad"} err="failed to get container status \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"0583b00729dd9e0bf9a03cb87ca70b6a491be7b7fb4873b5fed25d6c6d4f62ad\": not found" Mar 12 23:48:35.089877 kubelet[2779]: I0312 23:48:35.089454 2779 scope.go:122] "RemoveContainer" containerID="34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c" Mar 12 23:48:35.089877 kubelet[2779]: E0312 23:48:35.089800 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\": not found" containerID="34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c" Mar 12 23:48:35.089877 kubelet[2779]: I0312 23:48:35.089849 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c"} err="failed to get container status \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\": not found" Mar 12 23:48:35.090030 containerd[1543]: time="2026-03-12T23:48:35.089632105Z" level=error msg="ContainerStatus for \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34c427c9cfc3927262ad2c57d0d1152704177d5b193f589ac20555ad687afc0c\": not found" Mar 12 
23:48:35.090151 kubelet[2779]: I0312 23:48:35.089863 2779 scope.go:122] "RemoveContainer" containerID="cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb" Mar 12 23:48:35.090347 containerd[1543]: time="2026-03-12T23:48:35.090300652Z" level=error msg="ContainerStatus for \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\": not found" Mar 12 23:48:35.090487 kubelet[2779]: E0312 23:48:35.090461 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\": not found" containerID="cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb" Mar 12 23:48:35.090630 kubelet[2779]: I0312 23:48:35.090568 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb"} err="failed to get container status \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbf222644b68575d12c4b13af7949962559fb5ed45f90ac26641a3e94ef199bb\": not found" Mar 12 23:48:35.090630 kubelet[2779]: I0312 23:48:35.090596 2779 scope.go:122] "RemoveContainer" containerID="e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca" Mar 12 23:48:35.092418 containerd[1543]: time="2026-03-12T23:48:35.092389340Z" level=info msg="RemoveContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\"" Mar 12 23:48:35.096992 containerd[1543]: time="2026-03-12T23:48:35.096927713Z" level=info msg="RemoveContainer for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" returns successfully" Mar 12 23:48:35.097490 kubelet[2779]: 
I0312 23:48:35.097222 2779 scope.go:122] "RemoveContainer" containerID="e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca" Mar 12 23:48:35.097612 containerd[1543]: time="2026-03-12T23:48:35.097551935Z" level=error msg="ContainerStatus for \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\": not found" Mar 12 23:48:35.098124 kubelet[2779]: E0312 23:48:35.098043 2779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\": not found" containerID="e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca" Mar 12 23:48:35.098381 kubelet[2779]: I0312 23:48:35.098245 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca"} err="failed to get container status \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3786f0f0f5028ac9e80fb91ed7d21f23c7639f32fa80867f03fb356475119ca\": not found" Mar 12 23:48:35.104336 kubelet[2779]: I0312 23:48:35.104292 2779 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-xtables-lock\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104336 kubelet[2779]: I0312 23:48:35.104330 2779 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-net\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104336 kubelet[2779]: I0312 23:48:35.104343 
2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dw2qq\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-kube-api-access-dw2qq\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104353 2779 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cni-path\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104365 2779 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-cgroup\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104373 2779 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44489147-70fe-47c8-82b0-40df60011ce4-clustermesh-secrets\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104381 2779 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-lib-modules\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104390 2779 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44489147-70fe-47c8-82b0-40df60011ce4-cilium-config-path\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104398 2779 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44489147-70fe-47c8-82b0-40df60011ce4-hubble-tls\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 
23:48:35.104411 2779 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-host-proc-sys-kernel\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104538 kubelet[2779]: I0312 23:48:35.104419 2779 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-etc-cni-netd\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104904 kubelet[2779]: I0312 23:48:35.104428 2779 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-hostproc\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.104904 kubelet[2779]: I0312 23:48:35.104436 2779 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44489147-70fe-47c8-82b0-40df60011ce4-cilium-run\") on node \"ci-4459-2-4-n-d21c84289b\" DevicePath \"\"" Mar 12 23:48:35.398340 kubelet[2779]: I0312 23:48:35.398280 2779 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="44489147-70fe-47c8-82b0-40df60011ce4" path="/var/lib/kubelet/pods/44489147-70fe-47c8-82b0-40df60011ce4/volumes" Mar 12 23:48:35.399840 kubelet[2779]: I0312 23:48:35.399778 2779 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a87f2336-862c-4c61-8b69-b274919bad96" path="/var/lib/kubelet/pods/a87f2336-862c-4c61-8b69-b274919bad96/volumes" Mar 12 23:48:35.693187 systemd[1]: var-lib-kubelet-pods-44489147\x2d70fe\x2d47c8\x2d82b0\x2d40df60011ce4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddw2qq.mount: Deactivated successfully. Mar 12 23:48:35.693331 systemd[1]: var-lib-kubelet-pods-a87f2336\x2d862c\x2d4c61\x2d8b69\x2db274919bad96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dphrtk.mount: Deactivated successfully. 
Mar 12 23:48:35.693422 systemd[1]: var-lib-kubelet-pods-44489147\x2d70fe\x2d47c8\x2d82b0\x2d40df60011ce4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 12 23:48:35.693506 systemd[1]: var-lib-kubelet-pods-44489147\x2d70fe\x2d47c8\x2d82b0\x2d40df60011ce4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 12 23:48:36.532125 kubelet[2779]: E0312 23:48:36.532047 2779 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 12 23:48:36.607602 sshd[4319]: Connection closed by 20.161.92.111 port 42944 Mar 12 23:48:36.610045 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Mar 12 23:48:36.614486 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. Mar 12 23:48:36.616417 systemd[1]: sshd@20-159.69.55.216:22-20.161.92.111:42944.service: Deactivated successfully. Mar 12 23:48:36.621037 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 23:48:36.621328 systemd[1]: session-21.scope: Consumed 1.481s CPU time, 22.7M memory peak. Mar 12 23:48:36.623440 systemd-logind[1478]: Removed session 21. Mar 12 23:48:36.712478 systemd[1]: Started sshd@21-159.69.55.216:22-20.161.92.111:42954.service - OpenSSH per-connection server daemon (20.161.92.111:42954). Mar 12 23:48:37.239769 sshd[4463]: Accepted publickey for core from 20.161.92.111 port 42954 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4 Mar 12 23:48:37.242275 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:48:37.248633 systemd-logind[1478]: New session 22 of user core. Mar 12 23:48:37.256133 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 12 23:48:37.395763 kubelet[2779]: E0312 23:48:37.394380 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-47vzx" podUID="481f0b52-84a8-4d95-87e3-dff397fedd27" Mar 12 23:48:38.392418 systemd[1]: Created slice kubepods-burstable-podd2b2a770_5aea_4c51_a0e3_0f5f95031820.slice - libcontainer container kubepods-burstable-podd2b2a770_5aea_4c51_a0e3_0f5f95031820.slice. Mar 12 23:48:38.447385 sshd[4466]: Connection closed by 20.161.92.111 port 42954 Mar 12 23:48:38.450064 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Mar 12 23:48:38.454767 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Mar 12 23:48:38.455293 systemd[1]: sshd@21-159.69.55.216:22-20.161.92.111:42954.service: Deactivated successfully. Mar 12 23:48:38.459113 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 23:48:38.462757 systemd-logind[1478]: Removed session 22. 
Mar 12 23:48:38.529471 kubelet[2779]: I0312 23:48:38.529384 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-xtables-lock\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529541 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2b2a770-5aea-4c51-a0e3-0f5f95031820-cilium-ipsec-secrets\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529623 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2b2a770-5aea-4c51-a0e3-0f5f95031820-hubble-tls\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529678 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-hostproc\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529712 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-cilium-cgroup\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529773 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2b2a770-5aea-4c51-a0e3-0f5f95031820-clustermesh-secrets\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530107 kubelet[2779]: I0312 23:48:38.529804 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-lib-modules\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.529854 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-host-proc-sys-kernel\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.529892 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-bpf-maps\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.529935 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-host-proc-sys-net\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.529965 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-etc-cni-netd\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.530026 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc25l\" (UniqueName: \"kubernetes.io/projected/d2b2a770-5aea-4c51-a0e3-0f5f95031820-kube-api-access-tc25l\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530421 kubelet[2779]: I0312 23:48:38.530064 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-cilium-run\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530824 kubelet[2779]: I0312 23:48:38.530113 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2b2a770-5aea-4c51-a0e3-0f5f95031820-cni-path\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.530824 kubelet[2779]: I0312 23:48:38.530143 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2b2a770-5aea-4c51-a0e3-0f5f95031820-cilium-config-path\") pod \"cilium-ffnhw\" (UID: \"d2b2a770-5aea-4c51-a0e3-0f5f95031820\") " pod="kube-system/cilium-ffnhw" Mar 12 23:48:38.563784 systemd[1]: Started sshd@22-159.69.55.216:22-20.161.92.111:42970.service - OpenSSH per-connection server daemon (20.161.92.111:42970). 
Mar 12 23:48:38.700652 containerd[1543]: time="2026-03-12T23:48:38.699543153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffnhw,Uid:d2b2a770-5aea-4c51-a0e3-0f5f95031820,Namespace:kube-system,Attempt:0,}" Mar 12 23:48:38.723063 containerd[1543]: time="2026-03-12T23:48:38.722977817Z" level=info msg="connecting to shim 6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" namespace=k8s.io protocol=ttrpc version=3 Mar 12 23:48:38.748076 systemd[1]: Started cri-containerd-6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00.scope - libcontainer container 6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00. Mar 12 23:48:38.775283 containerd[1543]: time="2026-03-12T23:48:38.775177678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffnhw,Uid:d2b2a770-5aea-4c51-a0e3-0f5f95031820,Namespace:kube-system,Attempt:0,} returns sandbox id \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\"" Mar 12 23:48:38.781440 containerd[1543]: time="2026-03-12T23:48:38.781396940Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 23:48:38.788909 containerd[1543]: time="2026-03-12T23:48:38.788867768Z" level=info msg="Container 0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:48:38.794248 containerd[1543]: time="2026-03-12T23:48:38.794199021Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f\"" Mar 12 23:48:38.795974 containerd[1543]: time="2026-03-12T23:48:38.794878689Z" level=info 
msg="StartContainer for \"0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f\"" Mar 12 23:48:38.795974 containerd[1543]: time="2026-03-12T23:48:38.795712732Z" level=info msg="connecting to shim 0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" protocol=ttrpc version=3 Mar 12 23:48:38.816943 systemd[1]: Started cri-containerd-0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f.scope - libcontainer container 0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f. Mar 12 23:48:38.852384 containerd[1543]: time="2026-03-12T23:48:38.852332435Z" level=info msg="StartContainer for \"0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f\" returns successfully" Mar 12 23:48:38.864040 systemd[1]: cri-containerd-0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f.scope: Deactivated successfully. Mar 12 23:48:38.868902 containerd[1543]: time="2026-03-12T23:48:38.868752438Z" level=info msg="received container exit event container_id:\"0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f\" id:\"0313e7a3cf3602184adbb151777d021e3becb0c5e58566947d81908ebdcfa14f\" pid:4545 exited_at:{seconds:1773359318 nanos:868487411}" Mar 12 23:48:39.059663 containerd[1543]: time="2026-03-12T23:48:39.059540326Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 12 23:48:39.072429 containerd[1543]: time="2026-03-12T23:48:39.070787132Z" level=info msg="Container 6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:48:39.077738 containerd[1543]: time="2026-03-12T23:48:39.077607735Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86\"" Mar 12 23:48:39.078636 containerd[1543]: time="2026-03-12T23:48:39.078333728Z" level=info msg="StartContainer for \"6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86\"" Mar 12 23:48:39.080302 containerd[1543]: time="2026-03-12T23:48:39.080094504Z" level=info msg="connecting to shim 6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" protocol=ttrpc version=3 Mar 12 23:48:39.099983 systemd[1]: Started cri-containerd-6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86.scope - libcontainer container 6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86. Mar 12 23:48:39.115759 sshd[4479]: Accepted publickey for core from 20.161.92.111 port 42970 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4 Mar 12 23:48:39.117254 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:48:39.124866 systemd-logind[1478]: New session 23 of user core. Mar 12 23:48:39.129140 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 23:48:39.153231 containerd[1543]: time="2026-03-12T23:48:39.153153259Z" level=info msg="StartContainer for \"6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86\" returns successfully" Mar 12 23:48:39.160141 systemd[1]: cri-containerd-6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86.scope: Deactivated successfully. 
Mar 12 23:48:39.163094 containerd[1543]: time="2026-03-12T23:48:39.162943439Z" level=info msg="received container exit event container_id:\"6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86\" id:\"6537f321b24c73921b381d57bd0ed4e7287657ab7dadf71e8a31b1fab2626d86\" pid:4591 exited_at:{seconds:1773359319 nanos:162541799}" Mar 12 23:48:39.395237 kubelet[2779]: E0312 23:48:39.395156 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-47vzx" podUID="481f0b52-84a8-4d95-87e3-dff397fedd27" Mar 12 23:48:39.406638 sshd[4597]: Connection closed by 20.161.92.111 port 42970 Mar 12 23:48:39.408952 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Mar 12 23:48:39.413388 systemd-logind[1478]: Session 23 logged out. Waiting for processes to exit. Mar 12 23:48:39.413682 systemd[1]: sshd@22-159.69.55.216:22-20.161.92.111:42970.service: Deactivated successfully. Mar 12 23:48:39.416548 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 23:48:39.419635 systemd-logind[1478]: Removed session 23. Mar 12 23:48:39.509945 systemd[1]: Started sshd@23-159.69.55.216:22-20.161.92.111:42976.service - OpenSSH per-connection server daemon (20.161.92.111:42976). Mar 12 23:48:40.048317 sshd[4629]: Accepted publickey for core from 20.161.92.111 port 42976 ssh2: RSA SHA256:efFLS9MdSfnBpQoXIlctriWXrDgGS/o5pOWMaZl9Yd4 Mar 12 23:48:40.050697 sshd-session[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 23:48:40.058681 systemd-logind[1478]: New session 24 of user core. Mar 12 23:48:40.064104 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 12 23:48:40.067140 containerd[1543]: time="2026-03-12T23:48:40.064989926Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 12 23:48:40.082916 containerd[1543]: time="2026-03-12T23:48:40.082874437Z" level=info msg="Container 19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:48:40.093059 containerd[1543]: time="2026-03-12T23:48:40.093018908Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7\"" Mar 12 23:48:40.093862 containerd[1543]: time="2026-03-12T23:48:40.093824025Z" level=info msg="StartContainer for \"19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7\"" Mar 12 23:48:40.098447 containerd[1543]: time="2026-03-12T23:48:40.098396552Z" level=info msg="connecting to shim 19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" protocol=ttrpc version=3 Mar 12 23:48:40.120994 systemd[1]: Started cri-containerd-19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7.scope - libcontainer container 19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7. Mar 12 23:48:40.189021 containerd[1543]: time="2026-03-12T23:48:40.188963588Z" level=info msg="StartContainer for \"19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7\" returns successfully" Mar 12 23:48:40.190898 systemd[1]: cri-containerd-19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7.scope: Deactivated successfully. 
Mar 12 23:48:40.194232 containerd[1543]: time="2026-03-12T23:48:40.194112216Z" level=info msg="received container exit event container_id:\"19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7\" id:\"19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7\" pid:4646 exited_at:{seconds:1773359320 nanos:193023089}" Mar 12 23:48:40.644855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19ae6896934ade90025e000de40da9b8770a529f34d2b6df0de78c79693cccd7-rootfs.mount: Deactivated successfully. Mar 12 23:48:41.079817 containerd[1543]: time="2026-03-12T23:48:41.076471608Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 12 23:48:41.100906 containerd[1543]: time="2026-03-12T23:48:41.100865156Z" level=info msg="Container fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:48:41.112607 containerd[1543]: time="2026-03-12T23:48:41.112553861Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638\"" Mar 12 23:48:41.114237 containerd[1543]: time="2026-03-12T23:48:41.113992436Z" level=info msg="StartContainer for \"fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638\"" Mar 12 23:48:41.116243 containerd[1543]: time="2026-03-12T23:48:41.116214813Z" level=info msg="connecting to shim fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" protocol=ttrpc version=3 Mar 12 23:48:41.153971 systemd[1]: Started cri-containerd-fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638.scope - libcontainer container 
fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638. Mar 12 23:48:41.212017 systemd[1]: cri-containerd-fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638.scope: Deactivated successfully. Mar 12 23:48:41.214258 containerd[1543]: time="2026-03-12T23:48:41.214215242Z" level=info msg="received container exit event container_id:\"fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638\" id:\"fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638\" pid:4691 exited_at:{seconds:1773359321 nanos:213335930}" Mar 12 23:48:41.217871 containerd[1543]: time="2026-03-12T23:48:41.217820519Z" level=info msg="StartContainer for \"fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638\" returns successfully" Mar 12 23:48:41.239019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc90bbc6c35ccc734553238b20b4139d9eb3d0abb5cc092f4ca67b7434e85638-rootfs.mount: Deactivated successfully. Mar 12 23:48:41.395486 kubelet[2779]: E0312 23:48:41.394718 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-47vzx" podUID="481f0b52-84a8-4d95-87e3-dff397fedd27" Mar 12 23:48:41.534106 kubelet[2779]: E0312 23:48:41.534014 2779 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 12 23:48:42.079782 containerd[1543]: time="2026-03-12T23:48:42.079265869Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 12 23:48:42.095956 containerd[1543]: time="2026-03-12T23:48:42.095905324Z" level=info msg="Container 
614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f: CDI devices from CRI Config.CDIDevices: []" Mar 12 23:48:42.106809 containerd[1543]: time="2026-03-12T23:48:42.106747584Z" level=info msg="CreateContainer within sandbox \"6317677768d1b4a92c7a9e583be6fee891c6883bb793b6e45ac852569b81df00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f\"" Mar 12 23:48:42.108106 containerd[1543]: time="2026-03-12T23:48:42.107650496Z" level=info msg="StartContainer for \"614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f\"" Mar 12 23:48:42.109154 containerd[1543]: time="2026-03-12T23:48:42.109112353Z" level=info msg="connecting to shim 614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f" address="unix:///run/containerd/s/cf800bca79b48939a21d9210beb37c389ee99496f7f84f44f684a6dea55e89b5" protocol=ttrpc version=3 Mar 12 23:48:42.140057 systemd[1]: Started cri-containerd-614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f.scope - libcontainer container 614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f. 
Mar 12 23:48:42.197851 containerd[1543]: time="2026-03-12T23:48:42.197772892Z" level=info msg="StartContainer for \"614d298e716a77fe94cc2b3d26b8d6eb80ad0399d28aa05941fd6d61e7c6692f\" returns successfully" Mar 12 23:48:42.513773 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 12 23:48:43.111631 kubelet[2779]: I0312 23:48:43.111192 2779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-ffnhw" podStartSLOduration=5.111177043 podStartE2EDuration="5.111177043s" podCreationTimestamp="2026-03-12 23:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 23:48:43.10878439 +0000 UTC m=+191.846580164" watchObservedRunningTime="2026-03-12 23:48:43.111177043 +0000 UTC m=+191.848972697" Mar 12 23:48:43.395838 kubelet[2779]: E0312 23:48:43.394174 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-47vzx" podUID="481f0b52-84a8-4d95-87e3-dff397fedd27" Mar 12 23:48:45.398521 kubelet[2779]: E0312 23:48:45.398130 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-47vzx" podUID="481f0b52-84a8-4d95-87e3-dff397fedd27" Mar 12 23:48:45.467624 systemd-networkd[1432]: lxc_health: Link UP Mar 12 23:48:45.474679 systemd-networkd[1432]: lxc_health: Gained carrier Mar 12 23:48:46.033929 kubelet[2779]: I0312 23:48:46.033867 2779 setters.go:546] "Node became not ready" node="ci-4459-2-4-n-d21c84289b" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T23:48:46Z","lastTransitionTime":"2026-03-12T23:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 12 23:48:46.667003 systemd-networkd[1432]: lxc_health: Gained IPv6LL Mar 12 23:48:51.129539 sshd[4632]: Connection closed by 20.161.92.111 port 42976 Mar 12 23:48:51.133246 sshd-session[4629]: pam_unix(sshd:session): session closed for user core Mar 12 23:48:51.138370 systemd[1]: sshd@23-159.69.55.216:22-20.161.92.111:42976.service: Deactivated successfully. Mar 12 23:48:51.141110 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 23:48:51.142723 systemd-logind[1478]: Session 24 logged out. Waiting for processes to exit. Mar 12 23:48:51.145276 systemd-logind[1478]: Removed session 24. Mar 12 23:49:05.499614 kubelet[2779]: E0312 23:49:05.499390 2779 controller.go:251] "Failed to update lease" err="Put \"https://159.69.55.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-d21c84289b?timeout=10s\": context deadline exceeded" Mar 12 23:49:05.721404 systemd[1]: cri-containerd-03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c.scope: Deactivated successfully. Mar 12 23:49:05.723382 systemd[1]: cri-containerd-03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c.scope: Consumed 3.277s CPU time, 55.1M memory peak. 
Mar 12 23:49:05.727019 containerd[1543]: time="2026-03-12T23:49:05.726785737Z" level=info msg="received container exit event container_id:\"03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c\" id:\"03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c\" pid:2600 exit_status:1 exited_at:{seconds:1773359345 nanos:726306238}"
Mar 12 23:49:05.744101 kubelet[2779]: E0312 23:49:05.743202 2779 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38594->10.0.0.2:2379: read: connection timed out"
Mar 12 23:49:05.759010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c-rootfs.mount: Deactivated successfully.
Mar 12 23:49:06.151841 kubelet[2779]: E0312 23:49:06.151601 2779 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T23:48:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T23:48:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T23:48:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T23:48:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T23:48:56Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4459-2-4-n-d21c84289b\": Patch \"https://159.69.55.216:6443/api/v1/nodes/ci-4459-2-4-n-d21c84289b/status?timeout=10s\": context deadline exceeded"
Mar 12 23:49:06.159249 kubelet[2779]: I0312 23:49:06.159188 2779 scope.go:122] "RemoveContainer" containerID="03467d72f00a810faa73b1fe7f9fe9d9bf0e312f01ab23ccba7f538b5813815c"
Mar 12 23:49:06.162375 containerd[1543]: time="2026-03-12T23:49:06.162332040Z" level=info msg="CreateContainer within sandbox \"da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 12 23:49:06.171605 containerd[1543]: time="2026-03-12T23:49:06.170985040Z" level=info msg="Container 557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:49:06.179876 containerd[1543]: time="2026-03-12T23:49:06.179816751Z" level=info msg="CreateContainer within sandbox \"da14013c95f8e2376913bc21ec5f3d10e56d30afbe6443462f1dca1f6aee20ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c\""
Mar 12 23:49:06.180599 containerd[1543]: time="2026-03-12T23:49:06.180568120Z" level=info msg="StartContainer for \"557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c\""
Mar 12 23:49:06.181743 containerd[1543]: time="2026-03-12T23:49:06.181670914Z" level=info msg="connecting to shim 557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c" address="unix:///run/containerd/s/f6e2a7ad49d69404472b4cb877ffaa2430d5d9c1c2b1225228210c775e10f206" protocol=ttrpc version=3
Mar 12 23:49:06.206009 systemd[1]: Started cri-containerd-557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c.scope - libcontainer container 557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c.
Mar 12 23:49:06.253127 containerd[1543]: time="2026-03-12T23:49:06.253088255Z" level=info msg="StartContainer for \"557f3308a475b8dd95c7a87334cbe240e95a10b6d27d7346c8d5cd049e7b2f6c\" returns successfully"
Mar 12 23:49:10.572496 systemd[1]: cri-containerd-837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4.scope: Deactivated successfully.
Mar 12 23:49:10.573595 systemd[1]: cri-containerd-837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4.scope: Consumed 3.051s CPU time, 20.8M memory peak.
Mar 12 23:49:10.575513 containerd[1543]: time="2026-03-12T23:49:10.575374564Z" level=info msg="received container exit event container_id:\"837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4\" id:\"837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4\" pid:2640 exit_status:1 exited_at:{seconds:1773359350 nanos:574886781}"
Mar 12 23:49:10.600257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4-rootfs.mount: Deactivated successfully.
Mar 12 23:49:11.179079 kubelet[2779]: I0312 23:49:11.179023 2779 scope.go:122] "RemoveContainer" containerID="837d9ef7fc0a6112b43282556efa47e91b2d35e17dfbc9cae2fb6462dc0520f4"
Mar 12 23:49:11.181483 containerd[1543]: time="2026-03-12T23:49:11.181429153Z" level=info msg="CreateContainer within sandbox \"c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 12 23:49:11.195747 containerd[1543]: time="2026-03-12T23:49:11.195497456Z" level=info msg="Container d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f: CDI devices from CRI Config.CDIDevices: []"
Mar 12 23:49:11.204050 containerd[1543]: time="2026-03-12T23:49:11.203979101Z" level=info msg="CreateContainer within sandbox \"c4d34a33f5230d466074794327236ea531de8643ace3ed1104d87fe709315326\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f\""
Mar 12 23:49:11.204680 containerd[1543]: time="2026-03-12T23:49:11.204644799Z" level=info msg="StartContainer for \"d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f\""
Mar 12 23:49:11.206492 containerd[1543]: time="2026-03-12T23:49:11.206423981Z" level=info msg="connecting to shim d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f" address="unix:///run/containerd/s/75c233071d902cc08698eb220b2779b3b577438a3811fef9ac89b76cd41b10b7" protocol=ttrpc version=3
Mar 12 23:49:11.224940 systemd[1]: Started cri-containerd-d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f.scope - libcontainer container d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f.
Mar 12 23:49:11.264140 containerd[1543]: time="2026-03-12T23:49:11.264013671Z" level=info msg="StartContainer for \"d93287fd061f014e42374fabea8bf0539e99729c822e0f8eb4fb782deb872d9f\" returns successfully"