Apr 13 19:22:08.890304 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:22:08.890332 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:22:08.890344 kernel: KASLR enabled
Apr 13 19:22:08.890350 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 19:22:08.890357 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 13 19:22:08.890363 kernel: random: crng init done
Apr 13 19:22:08.890371 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:22:08.890377 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 13 19:22:08.890385 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 13 19:22:08.890393 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890401 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890407 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890414 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890421 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890429 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890438 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890445 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.890452 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 19:22:08.892125 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 19:22:08.892146 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 13 19:22:08.892153 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:22:08.892160 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:22:08.892166 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 13 19:22:08.892173 kernel: Zone ranges:
Apr 13 19:22:08.892179 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:22:08.892192 kernel: DMA32 empty
Apr 13 19:22:08.892199 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 13 19:22:08.892205 kernel: Movable zone start for each node
Apr 13 19:22:08.892212 kernel: Early memory node ranges
Apr 13 19:22:08.892218 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 13 19:22:08.892225 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 13 19:22:08.892231 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 13 19:22:08.892237 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 13 19:22:08.892244 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 13 19:22:08.892250 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 13 19:22:08.892256 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 13 19:22:08.892263 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 13 19:22:08.892270 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 19:22:08.892277 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:22:08.892283 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:22:08.892293 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:22:08.892299 kernel: psci: Trusted OS migration not required
Apr 13 19:22:08.892306 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:22:08.892315 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 13 19:22:08.892322 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:22:08.892329 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:22:08.892336 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:22:08.892343 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:22:08.892350 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:22:08.892357 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:22:08.892363 kernel: CPU features: detected: Spectre-v4
Apr 13 19:22:08.892370 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:22:08.892377 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:22:08.892385 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:22:08.892392 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:22:08.892399 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:22:08.892406 kernel: alternatives: applying boot alternatives
Apr 13 19:22:08.892414 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:22:08.892421 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:22:08.892428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:22:08.892435 kernel: Fallback order for Node 0: 0
Apr 13 19:22:08.892441 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 13 19:22:08.892448 kernel: Policy zone: Normal
Apr 13 19:22:08.892455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:22:08.892463 kernel: software IO TLB: area num 2.
Apr 13 19:22:08.892470 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 13 19:22:08.892477 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 13 19:22:08.892484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:22:08.892491 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:22:08.892499 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:22:08.892506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:22:08.892513 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:22:08.892520 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:22:08.892526 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:22:08.892534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:22:08.892540 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:22:08.892549 kernel: GICv3: 256 SPIs implemented
Apr 13 19:22:08.892556 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:22:08.892563 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:22:08.892569 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 13 19:22:08.892576 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 13 19:22:08.892583 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 13 19:22:08.892590 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:22:08.892597 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:22:08.892603 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 13 19:22:08.892611 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 13 19:22:08.892618 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:22:08.892626 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:22:08.892633 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:22:08.892640 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:22:08.892647 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:22:08.892654 kernel: Console: colour dummy device 80x25
Apr 13 19:22:08.892661 kernel: ACPI: Core revision 20230628
Apr 13 19:22:08.892668 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:22:08.892676 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:22:08.892683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:22:08.892690 kernel: landlock: Up and running.
Apr 13 19:22:08.892707 kernel: SELinux: Initializing.
Apr 13 19:22:08.892714 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:22:08.892721 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:22:08.892728 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:22:08.892736 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:22:08.892743 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:22:08.892750 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:22:08.892757 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 13 19:22:08.892764 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 13 19:22:08.892773 kernel: Remapping and enabling EFI services.
Apr 13 19:22:08.892780 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:22:08.892787 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:22:08.892794 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 13 19:22:08.892801 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 13 19:22:08.892808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:22:08.892815 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:22:08.892822 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:22:08.892829 kernel: SMP: Total of 2 processors activated.
Apr 13 19:22:08.892836 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:22:08.892845 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:22:08.892858 kernel: CPU features: detected: Common not Private translations
Apr 13 19:22:08.892871 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:22:08.892880 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 13 19:22:08.892887 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:22:08.892895 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:22:08.892902 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:22:08.892909 kernel: CPU features: detected: RAS Extension Support
Apr 13 19:22:08.892918 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 13 19:22:08.892926 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:22:08.892933 kernel: alternatives: applying system-wide alternatives
Apr 13 19:22:08.892940 kernel: devtmpfs: initialized
Apr 13 19:22:08.892948 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:22:08.892956 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:22:08.892963 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:22:08.892971 kernel: SMBIOS 3.0.0 present.
Apr 13 19:22:08.892979 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 13 19:22:08.892987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:22:08.892994 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:22:08.893002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:22:08.893010 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:22:08.893017 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:22:08.893098 kernel: audit: type=2000 audit(0.010:1): state=initialized audit_enabled=0 res=1
Apr 13 19:22:08.893106 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:22:08.893113 kernel: cpuidle: using governor menu
Apr 13 19:22:08.893123 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:22:08.893130 kernel: ASID allocator initialised with 32768 entries
Apr 13 19:22:08.893138 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:22:08.893145 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:22:08.893152 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 13 19:22:08.893160 kernel: Modules: 0 pages in range for non-PLT usage
Apr 13 19:22:08.893167 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:22:08.893174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:22:08.893182 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:22:08.893190 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:22:08.893202 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:22:08.893214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:22:08.893223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:22:08.893233 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:22:08.893240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:22:08.893248 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:22:08.893255 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:22:08.893263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:22:08.893272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:22:08.893280 kernel: ACPI: Interpreter enabled
Apr 13 19:22:08.893287 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:22:08.893294 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:22:08.893302 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 13 19:22:08.893309 kernel: printk: console [ttyAMA0] enabled
Apr 13 19:22:08.893317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 19:22:08.893509 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:22:08.893587 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:22:08.893654 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:22:08.893777 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 13 19:22:08.893848 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 13 19:22:08.893858 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 13 19:22:08.893866 kernel: PCI host bridge to bus 0000:00
Apr 13 19:22:08.893937 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 13 19:22:08.893997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:22:08.894076 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 13 19:22:08.894136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 19:22:08.894225 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 13 19:22:08.894312 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 13 19:22:08.894382 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 13 19:22:08.894448 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:22:08.894528 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.894594 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 13 19:22:08.894669 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.894750 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 13 19:22:08.894828 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.894895 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 13 19:22:08.894982 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897189 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 13 19:22:08.897296 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897365 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 13 19:22:08.897438 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897509 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 13 19:22:08.897592 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897658 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 13 19:22:08.897749 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897819 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 13 19:22:08.897891 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 19:22:08.897960 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 13 19:22:08.898587 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 13 19:22:08.898683 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 13 19:22:08.900179 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:22:08.900270 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 13 19:22:08.900339 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:22:08.900407 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:22:08.900491 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 19:22:08.900566 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 13 19:22:08.900642 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 19:22:08.900724 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 13 19:22:08.900803 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 13 19:22:08.900883 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 19:22:08.900952 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 13 19:22:08.902631 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 19:22:08.902760 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 13 19:22:08.902831 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 13 19:22:08.902908 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 19:22:08.902979 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 13 19:22:08.903069 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:22:08.903153 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 19:22:08.903221 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 13 19:22:08.903289 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 13 19:22:08.903355 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 19:22:08.903426 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 13 19:22:08.903490 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:22:08.903556 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 13 19:22:08.903628 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 13 19:22:08.903694 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 13 19:22:08.903804 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 13 19:22:08.903879 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 19:22:08.903944 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:22:08.904008 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 13 19:22:08.906318 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 19:22:08.906414 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 13 19:22:08.906486 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 13 19:22:08.906561 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 19:22:08.906627 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:22:08.906690 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 13 19:22:08.906778 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 19:22:08.906844 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:22:08.906910 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 13 19:22:08.906983 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 19:22:08.907064 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:22:08.907129 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 13 19:22:08.907202 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 19:22:08.907267 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:22:08.907331 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 13 19:22:08.907402 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 19:22:08.907467 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:22:08.907535 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 13 19:22:08.907615 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 13 19:22:08.907683 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:08.907769 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 13 19:22:08.907839 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:08.907907 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 13 19:22:08.907972 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:08.908902 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 13 19:22:08.909001 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:08.909099 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 13 19:22:08.909170 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:08.909239 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:08.909304 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:08.909377 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:08.909517 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:08.909605 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:08.909674 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:08.909764 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 13 19:22:08.909837 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:08.909909 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 13 19:22:08.909981 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 13 19:22:08.910078 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 13 19:22:08.910167 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 19:22:08.910241 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 13 19:22:08.910306 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 19:22:08.910383 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 13 19:22:08.910449 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 19:22:08.910516 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 13 19:22:08.910587 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 19:22:08.910655 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 13 19:22:08.910733 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 19:22:08.910802 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 13 19:22:08.910868 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 19:22:08.910934 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 13 19:22:08.910999 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 19:22:08.913772 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 13 19:22:08.913884 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 19:22:08.913953 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 13 19:22:08.914020 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 13 19:22:08.914784 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 13 19:22:08.914867 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 13 19:22:08.914937 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 13 19:22:08.915005 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 13 19:22:08.915088 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 19:22:08.915163 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 19:22:08.915227 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 13 19:22:08.915293 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:08.915367 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 13 19:22:08.915442 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 19:22:08.915511 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 19:22:08.915596 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 13 19:22:08.915663 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:08.915752 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 13 19:22:08.915823 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 13 19:22:08.915891 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 19:22:08.915957 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 19:22:08.916096 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 13 19:22:08.916170 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:08.916243 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 13 19:22:08.916309 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 19:22:08.916372 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 19:22:08.916435 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 13 19:22:08.916501 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:08.916575 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 13 19:22:08.916648 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 13 19:22:08.916760 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 19:22:08.916837 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 19:22:08.916902 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 13 19:22:08.916967 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:08.917117 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 13 19:22:08.917194 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 13 19:22:08.917262 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 19:22:08.917332 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 19:22:08.917395 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:08.917461 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:08.917536 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 13 19:22:08.917610 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 13 19:22:08.917679 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 13 19:22:08.917780 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 19:22:08.917849 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 19:22:08.917918 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:08.917983 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:08.918148 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 19:22:08.918220 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 19:22:08.918285 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:08.918349 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:08.918415 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 19:22:08.918479 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 13 19:22:08.918547 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 13 19:22:08.918610 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:08.918676 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 13 19:22:08.918748 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:22:08.918808 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 13 19:22:08.918879 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 19:22:08.918940 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 13 19:22:08.919002 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 13 19:22:08.919129 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 13 19:22:08.919191 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 13 19:22:08.919250 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 13 19:22:08.919317 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 13 19:22:08.919375 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 13 19:22:08.919439 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 13 19:22:08.919516 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 13 19:22:08.919577 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 13 19:22:08.919651 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 13 19:22:08.919730 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 13 19:22:08.919792 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 13 19:22:08.919850 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 13 19:22:08.919920 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 13 19:22:08.919980 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 13 19:22:08.920066 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 13 19:22:08.920136 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 13 19:22:08.920200 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 13 19:22:08.920259 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 13 19:22:08.920326 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 13 19:22:08.920386 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 13 19:22:08.920445 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 13 19:22:08.920517 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 13 19:22:08.920577 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 13 19:22:08.920638 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 13 19:22:08.920648 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:22:08.920657 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:22:08.920665 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:22:08.920672 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:22:08.920680 kernel: iommu: Default domain type: Translated
Apr 13 19:22:08.920688 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:22:08.920732 kernel: efivars: Registered efivars operations
Apr 13 19:22:08.920744 kernel: vgaarb: loaded
Apr 13 19:22:08.920756 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:22:08.920764 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:22:08.920772 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:22:08.920781 kernel: pnp: PnP ACPI init
Apr 13 19:22:08.920874 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 13 19:22:08.920886 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:22:08.920894 kernel: NET: Registered PF_INET protocol family
Apr 13 19:22:08.920902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:22:08.920913 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:22:08.920921 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:22:08.920930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:22:08.920937
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:22:08.920945 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:22:08.920954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:22:08.920962 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:22:08.920969 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:22:08.921071 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 13 19:22:08.921086 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:22:08.921094 kernel: kvm [1]: HYP mode not available Apr 13 19:22:08.921102 kernel: Initialise system trusted keyrings Apr 13 19:22:08.921110 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:22:08.921118 kernel: Key type asymmetric registered Apr 13 19:22:08.921125 kernel: Asymmetric key parser 'x509' registered Apr 13 19:22:08.921133 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:22:08.921141 kernel: io scheduler mq-deadline registered Apr 13 19:22:08.921149 kernel: io scheduler kyber registered Apr 13 19:22:08.921158 kernel: io scheduler bfq registered Apr 13 19:22:08.921167 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:22:08.921239 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 13 19:22:08.921306 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 13 19:22:08.921372 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.921440 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 13 19:22:08.921509 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 13 19:22:08.921580 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.921648 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 13 19:22:08.921732 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 13 19:22:08.921801 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.921871 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 13 19:22:08.921937 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 13 19:22:08.922006 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.922155 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 13 19:22:08.922223 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 13 19:22:08.922288 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.922355 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 13 19:22:08.922423 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 13 19:22:08.922492 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.922561 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 13 19:22:08.922627 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 13 19:22:08.922691 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.922806 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 13 19:22:08.922873 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 13 19:22:08.922943 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.922953 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 13 19:22:08.923017 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 13 19:22:08.923191 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 13 19:22:08.923257 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 19:22:08.923268 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 13 19:22:08.923281 kernel: ACPI: button: Power Button [PWRB] Apr 13 19:22:08.923295 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 13 19:22:08.923368 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 13 19:22:08.923439 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 13 19:22:08.923450 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:22:08.923459 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 13 19:22:08.923527 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 13 19:22:08.923537 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 13 19:22:08.923545 kernel: thunder_xcv, ver 1.0 Apr 13 19:22:08.923556 kernel: thunder_bgx, ver 1.0 Apr 13 19:22:08.923563 kernel: nicpf, ver 1.0 Apr 13 19:22:08.923571 kernel: nicvf, ver 1.0 Apr 13 19:22:08.923646 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:22:08.923794 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:22:08 UTC (1776108128) Apr 13 19:22:08.923810 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 19:22:08.923819 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 13 19:22:08.923827 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:22:08.923838 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:22:08.923849 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:22:08.923859 kernel: Segment Routing with IPv6 Apr 13 19:22:08.923868 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 13 19:22:08.923877 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:22:08.923885 kernel: Key type dns_resolver registered Apr 13 19:22:08.923893 kernel: registered taskstats version 1 Apr 13 19:22:08.923901 kernel: Loading compiled-in X.509 certificates Apr 13 19:22:08.923909 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:22:08.923918 kernel: Key type .fscrypt registered Apr 13 19:22:08.923926 kernel: Key type fscrypt-provisioning registered Apr 13 19:22:08.923934 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 13 19:22:08.923942 kernel: ima: Allocated hash algorithm: sha1 Apr 13 19:22:08.923950 kernel: ima: No architecture policies found Apr 13 19:22:08.923957 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 13 19:22:08.923965 kernel: clk: Disabling unused clocks Apr 13 19:22:08.923973 kernel: Freeing unused kernel memory: 39424K Apr 13 19:22:08.923981 kernel: Run /init as init process Apr 13 19:22:08.923989 kernel: with arguments: Apr 13 19:22:08.923999 kernel: /init Apr 13 19:22:08.924006 kernel: with environment: Apr 13 19:22:08.924013 kernel: HOME=/ Apr 13 19:22:08.924021 kernel: TERM=linux Apr 13 19:22:08.924042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:22:08.924053 systemd[1]: Detected virtualization kvm. Apr 13 19:22:08.924061 systemd[1]: Detected architecture arm64. Apr 13 19:22:08.924071 systemd[1]: Running in initrd. Apr 13 19:22:08.924079 systemd[1]: No hostname configured, using default hostname. Apr 13 19:22:08.924087 systemd[1]: Hostname set to . 
Apr 13 19:22:08.924096 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:22:08.924104 systemd[1]: Queued start job for default target initrd.target. Apr 13 19:22:08.924112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:22:08.924121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:22:08.924130 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 19:22:08.924140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:22:08.924149 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 19:22:08.924157 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 19:22:08.924167 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 19:22:08.924175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 19:22:08.924184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:22:08.924192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:22:08.924202 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:22:08.924211 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:22:08.924219 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:22:08.924227 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:22:08.924235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:22:08.924243 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:22:08.924252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 13 19:22:08.924260 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:22:08.924268 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:22:08.924278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:22:08.924287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:22:08.924295 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:22:08.924303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 19:22:08.924311 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:22:08.924321 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 19:22:08.924330 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 19:22:08.924338 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:22:08.924349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:22:08.924357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:22:08.924365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 19:22:08.924398 systemd-journald[237]: Collecting audit messages is disabled. Apr 13 19:22:08.924420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:22:08.924429 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 19:22:08.924438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:22:08.924447 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 19:22:08.924456 kernel: Bridge firewalling registered Apr 13 19:22:08.924465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:22:08.924473 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:22:08.924482 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:22:08.924491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:22:08.924500 systemd-journald[237]: Journal started Apr 13 19:22:08.924520 systemd-journald[237]: Runtime Journal (/run/log/journal/ab1f80176308422e870a89f61e84acae) is 8.0M, max 76.6M, 68.6M free. Apr 13 19:22:08.891095 systemd-modules-load[238]: Inserted module 'overlay' Apr 13 19:22:08.911099 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 13 19:22:08.936044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:22:08.940046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:22:08.943152 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:22:08.948126 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:22:08.960241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:22:08.961269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:22:08.963320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:22:08.969234 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 19:22:08.975558 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:22:08.985738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 19:22:08.992901 dracut-cmdline[271]: dracut-dracut-053 Apr 13 19:22:08.997995 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:22:09.021732 systemd-resolved[273]: Positive Trust Anchors: Apr 13 19:22:09.021756 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:22:09.021788 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:22:09.027356 systemd-resolved[273]: Defaulting to hostname 'linux'. Apr 13 19:22:09.028459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:22:09.029198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:22:09.130117 kernel: SCSI subsystem initialized Apr 13 19:22:09.136108 kernel: Loading iSCSI transport class v2.0-870. Apr 13 19:22:09.145521 kernel: iscsi: registered transport (tcp) Apr 13 19:22:09.159793 kernel: iscsi: registered transport (qla4xxx) Apr 13 19:22:09.159897 kernel: QLogic iSCSI HBA Driver Apr 13 19:22:09.210428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 13 19:22:09.218267 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 19:22:09.240303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 19:22:09.240434 kernel: device-mapper: uevent: version 1.0.3 Apr 13 19:22:09.240463 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 19:22:09.291075 kernel: raid6: neonx8 gen() 15599 MB/s Apr 13 19:22:09.308103 kernel: raid6: neonx4 gen() 13331 MB/s Apr 13 19:22:09.325072 kernel: raid6: neonx2 gen() 13164 MB/s Apr 13 19:22:09.342101 kernel: raid6: neonx1 gen() 10438 MB/s Apr 13 19:22:09.359088 kernel: raid6: int64x8 gen() 6915 MB/s Apr 13 19:22:09.376090 kernel: raid6: int64x4 gen() 7324 MB/s Apr 13 19:22:09.393077 kernel: raid6: int64x2 gen() 6102 MB/s Apr 13 19:22:09.410080 kernel: raid6: int64x1 gen() 5039 MB/s Apr 13 19:22:09.410181 kernel: raid6: using algorithm neonx8 gen() 15599 MB/s Apr 13 19:22:09.427110 kernel: raid6: .... xor() 11934 MB/s, rmw enabled Apr 13 19:22:09.427217 kernel: raid6: using neon recovery algorithm Apr 13 19:22:09.432070 kernel: xor: measuring software checksum speed Apr 13 19:22:09.432134 kernel: 8regs : 19092 MB/sec Apr 13 19:22:09.433242 kernel: 32regs : 18011 MB/sec Apr 13 19:22:09.433275 kernel: arm64_neon : 27016 MB/sec Apr 13 19:22:09.433295 kernel: xor: using function: arm64_neon (27016 MB/sec) Apr 13 19:22:09.484086 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 19:22:09.498980 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:22:09.505398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:22:09.520445 systemd-udevd[455]: Using default interface naming scheme 'v255'. Apr 13 19:22:09.523892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 13 19:22:09.534405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 19:22:09.549526 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Apr 13 19:22:09.585104 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:22:09.590213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:22:09.654085 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:22:09.660276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:22:09.682888 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:22:09.687486 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:22:09.688289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:22:09.691554 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:22:09.697288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:22:09.725223 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 13 19:22:09.754051 kernel: scsi host0: Virtio SCSI HBA Apr 13 19:22:09.763056 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 19:22:09.763140 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 19:22:09.789168 kernel: ACPI: bus type USB registered Apr 13 19:22:09.791323 kernel: usbcore: registered new interface driver usbfs Apr 13 19:22:09.791385 kernel: usbcore: registered new interface driver hub Apr 13 19:22:09.791396 kernel: usbcore: registered new device driver usb Apr 13 19:22:09.805243 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 13 19:22:09.807055 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 13 19:22:09.807246 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 19:22:09.807552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:22:09.809145 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 13 19:22:09.807664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:22:09.810283 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:22:09.810915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:22:09.811143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:09.812527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:22:09.820353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 19:22:09.838048 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 19:22:09.838261 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 19:22:09.841113 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 19:22:09.844059 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 19:22:09.844264 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 19:22:09.844352 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 19:22:09.847073 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 13 19:22:09.847292 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 13 19:22:09.846009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:09.850764 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 13 19:22:09.850924 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 13 19:22:09.851015 kernel: hub 1-0:1.0: USB hub found Apr 13 19:22:09.851150 kernel: hub 1-0:1.0: 4 ports detected Apr 13 19:22:09.851233 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 19:22:09.853292 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:22:09.860039 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 13 19:22:09.860260 kernel: hub 2-0:1.0: USB hub found Apr 13 19:22:09.860358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 19:22:09.860369 kernel: GPT:17805311 != 80003071 Apr 13 19:22:09.860378 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 19:22:09.860388 kernel: GPT:17805311 != 80003071 Apr 13 19:22:09.860405 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 13 19:22:09.860415 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:22:09.862074 kernel: hub 2-0:1.0: 4 ports detected Apr 13 19:22:09.862249 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 13 19:22:09.890821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:22:09.920955 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (503) Apr 13 19:22:09.921020 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (505) Apr 13 19:22:09.937162 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 19:22:09.937872 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 19:22:09.946192 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 19:22:09.951243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 13 19:22:09.958705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:22:09.964293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:22:09.973760 disk-uuid[575]: Primary Header is updated. Apr 13 19:22:09.973760 disk-uuid[575]: Secondary Entries is updated. Apr 13 19:22:09.973760 disk-uuid[575]: Secondary Header is updated. 
Apr 13 19:22:09.983047 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:22:09.988052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:22:09.993236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:22:10.093044 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 19:22:10.229160 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 13 19:22:10.229250 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 19:22:10.229600 kernel: usbcore: registered new interface driver usbhid Apr 13 19:22:10.230217 kernel: usbhid: USB HID core driver Apr 13 19:22:10.341166 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 13 19:22:10.471066 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 13 19:22:10.525650 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 13 19:22:10.997104 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:22:10.998052 disk-uuid[576]: The operation has completed successfully. Apr 13 19:22:11.046008 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 19:22:11.046169 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:22:11.062295 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:22:11.076936 sh[594]: Success Apr 13 19:22:11.090163 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:22:11.139239 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:22:11.150157 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 13 19:22:11.153159 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:22:11.181484 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:22:11.181560 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:22:11.181596 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:22:11.182205 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:22:11.182253 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:22:11.189083 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 19:22:11.190573 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:22:11.193888 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:22:11.201339 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:22:11.205214 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 19:22:11.220770 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:22:11.220841 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:22:11.220853 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:22:11.228459 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 19:22:11.228538 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:22:11.241954 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:22:11.242775 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:22:11.250177 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 13 19:22:11.257332 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 19:22:11.358078 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:22:11.364306 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:22:11.369586 ignition[694]: Ignition 2.19.0 Apr 13 19:22:11.369814 ignition[694]: Stage: fetch-offline Apr 13 19:22:11.369855 ignition[694]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:22:11.374475 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:22:11.369863 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 19:22:11.370040 ignition[694]: parsed url from cmdline: "" Apr 13 19:22:11.370043 ignition[694]: no config URL provided Apr 13 19:22:11.370048 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:22:11.370055 ignition[694]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:22:11.370061 ignition[694]: failed to fetch config: resource requires networking Apr 13 19:22:11.370271 ignition[694]: Ignition finished successfully Apr 13 19:22:11.399115 systemd-networkd[780]: lo: Link UP Apr 13 19:22:11.399120 systemd-networkd[780]: lo: Gained carrier Apr 13 19:22:11.401277 systemd-networkd[780]: Enumeration completed Apr 13 19:22:11.402167 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:22:11.402758 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:11.402762 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:22:11.403204 systemd[1]: Reached target network.target - Network. Apr 13 19:22:11.404274 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 19:22:11.404279 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:22:11.404993 systemd-networkd[780]: eth0: Link UP
Apr 13 19:22:11.404998 systemd-networkd[780]: eth0: Gained carrier
Apr 13 19:22:11.405009 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:11.409172 systemd-networkd[780]: eth1: Link UP
Apr 13 19:22:11.409176 systemd-networkd[780]: eth1: Gained carrier
Apr 13 19:22:11.409186 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:22:11.412201 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:22:11.427483 ignition[783]: Ignition 2.19.0
Apr 13 19:22:11.427501 ignition[783]: Stage: fetch
Apr 13 19:22:11.427758 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:11.427773 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:11.427880 ignition[783]: parsed url from cmdline: ""
Apr 13 19:22:11.427883 ignition[783]: no config URL provided
Apr 13 19:22:11.427888 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:22:11.427896 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:22:11.427920 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 19:22:11.428553 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 19:22:11.453147 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 19:22:11.468116 systemd-networkd[780]: eth0: DHCPv4 address 178.105.8.180/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 19:22:11.628718 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 19:22:11.633600 ignition[783]: GET result: OK
Apr 13 19:22:11.634002 ignition[783]: parsing config with SHA512: cfb78ce73b4bc813ebc5af3a4aa38e969bb63c01e4d911bc44a9bad135722d5aecc88635ecb685b263ed51761acacd6784541959bb05e71a3c79f9e422b87521
Apr 13 19:22:11.640907 unknown[783]: fetched base config from "system"
Apr 13 19:22:11.640915 unknown[783]: fetched base config from "system"
Apr 13 19:22:11.640930 unknown[783]: fetched user config from "hetzner"
Apr 13 19:22:11.642727 ignition[783]: fetch: fetch complete
Apr 13 19:22:11.644649 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:22:11.642742 ignition[783]: fetch: fetch passed
Apr 13 19:22:11.642804 ignition[783]: Ignition finished successfully
Apr 13 19:22:11.653340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:22:11.683759 ignition[790]: Ignition 2.19.0
Apr 13 19:22:11.683771 ignition[790]: Stage: kargs
Apr 13 19:22:11.683968 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:11.683977 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:11.685099 ignition[790]: kargs: kargs passed
Apr 13 19:22:11.685157 ignition[790]: Ignition finished successfully
Apr 13 19:22:11.687814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:22:11.697782 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:22:11.711405 ignition[797]: Ignition 2.19.0
Apr 13 19:22:11.711426 ignition[797]: Stage: disks
Apr 13 19:22:11.714556 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:22:11.711646 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:11.711676 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:11.715756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:22:11.713103 ignition[797]: disks: disks passed
Apr 13 19:22:11.717120 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:22:11.713172 ignition[797]: Ignition finished successfully
Apr 13 19:22:11.718380 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:22:11.719542 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:22:11.720495 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:22:11.729302 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:22:11.745814 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 19:22:11.751118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:22:11.757739 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:22:11.811125 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:22:11.811646 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:22:11.813510 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:22:11.826277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:22:11.829185 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:22:11.833253 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 19:22:11.836346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:22:11.841625 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:22:11.844286 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:22:11.847573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:22:11.855068 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Apr 13 19:22:11.858957 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:22:11.859012 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:22:11.860170 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:22:11.864510 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:22:11.864560 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:22:11.871619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:22:11.898032 coreos-metadata[816]: Apr 13 19:22:11.897 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 19:22:11.899403 coreos-metadata[816]: Apr 13 19:22:11.899 INFO Fetch successful
Apr 13 19:22:11.901244 coreos-metadata[816]: Apr 13 19:22:11.901 INFO wrote hostname ci-4081-3-7-f-96a1162b98 to /sysroot/etc/hostname
Apr 13 19:22:11.904409 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:22:11.908786 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:22:11.914980 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:22:11.921119 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:22:11.926020 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:22:12.027986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:22:12.032251 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:22:12.035740 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:22:12.045052 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:22:12.066394 ignition[930]: INFO : Ignition 2.19.0
Apr 13 19:22:12.067188 ignition[930]: INFO : Stage: mount
Apr 13 19:22:12.067871 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:12.069373 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:12.071140 ignition[930]: INFO : mount: mount passed
Apr 13 19:22:12.071374 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:22:12.073826 ignition[930]: INFO : Ignition finished successfully
Apr 13 19:22:12.074701 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:22:12.081270 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:22:12.181221 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:22:12.186377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:22:12.199074 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Apr 13 19:22:12.200381 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:22:12.200430 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:22:12.200441 kernel: BTRFS info (device sda6): using free space tree
Apr 13 19:22:12.204060 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 19:22:12.204113 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 19:22:12.206640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:22:12.233003 ignition[961]: INFO : Ignition 2.19.0
Apr 13 19:22:12.233003 ignition[961]: INFO : Stage: files
Apr 13 19:22:12.234207 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:12.234207 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:12.236395 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:22:12.237268 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:22:12.237268 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:22:12.240917 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:22:12.242185 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:22:12.243625 unknown[961]: wrote ssh authorized keys file for user: core
Apr 13 19:22:12.244910 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:22:12.246414 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:22:12.247606 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:22:12.247606 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:22:12.247606 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:22:12.315936 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:22:12.456328 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 13 19:22:12.544639 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:22:12.546990 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:22:12.546990 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:22:12.776394 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 13 19:22:12.878841 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 13 19:22:13.141859 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:22:13.141859 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:22:13.145404 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 13 19:22:13.398180 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 13 19:22:14.145305 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:22:14.145305 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:22:14.151180 ignition[961]: INFO : files: files passed
Apr 13 19:22:14.151180 ignition[961]: INFO : Ignition finished successfully
Apr 13 19:22:14.152344 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:22:14.162557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:22:14.168250 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:22:14.172299 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:22:14.174117 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:22:14.189229 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:22:14.189229 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:22:14.192244 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:22:14.194748 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:22:14.195696 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:22:14.200299 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:22:14.234906 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:22:14.235102 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:22:14.237301 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:22:14.238753 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:22:14.239700 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:22:14.245307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:22:14.258127 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:22:14.267306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:22:14.278850 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:22:14.280397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:22:14.281154 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:22:14.281712 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:22:14.281836 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:22:14.283361 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:22:14.284057 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:22:14.285139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:22:14.286378 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:22:14.287725 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:22:14.288906 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:22:14.290122 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:22:14.291431 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:22:14.292582 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:22:14.293599 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:22:14.294651 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:22:14.294780 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:22:14.296192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:22:14.296836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:22:14.297829 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:22:14.298280 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:22:14.298987 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:22:14.299114 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:22:14.300681 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:22:14.300794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:22:14.302111 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:22:14.302203 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:22:14.303162 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:22:14.303250 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:22:14.310322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:22:14.311718 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:22:14.311856 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:22:14.315237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:22:14.315770 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:22:14.315892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:22:14.316997 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:22:14.317493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:22:14.326389 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:22:14.328068 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:22:14.335709 ignition[1014]: INFO : Ignition 2.19.0
Apr 13 19:22:14.335709 ignition[1014]: INFO : Stage: umount
Apr 13 19:22:14.343337 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:22:14.343337 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 19:22:14.343337 ignition[1014]: INFO : umount: umount passed
Apr 13 19:22:14.343337 ignition[1014]: INFO : Ignition finished successfully
Apr 13 19:22:14.338064 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:22:14.340575 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:22:14.342083 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:22:14.345134 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:22:14.345185 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:22:14.346495 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:22:14.346533 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:22:14.347549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:22:14.347587 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:22:14.348187 systemd[1]: Stopped target network.target - Network.
Apr 13 19:22:14.349052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:22:14.349093 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:22:14.350936 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:22:14.351877 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:22:14.356101 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:22:14.361843 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:22:14.362986 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:22:14.364587 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:22:14.364697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:22:14.366829 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:22:14.366871 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:22:14.368768 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:22:14.368823 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:22:14.369753 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:22:14.369792 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:22:14.371569 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:22:14.378286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:22:14.384613 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 13 19:22:14.387799 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:22:14.387951 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:22:14.392169 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 13 19:22:14.393713 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:22:14.394074 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:22:14.395224 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:22:14.397132 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:22:14.398501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:22:14.398560 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:22:14.399369 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:22:14.399422 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:22:14.405309 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:22:14.405803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:22:14.405865 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:22:14.407855 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:22:14.407905 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:22:14.409440 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:22:14.409495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:22:14.410255 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:22:14.410293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:22:14.411221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:22:14.429812 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:22:14.430160 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:22:14.432517 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:22:14.432595 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:22:14.434412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:22:14.434445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:22:14.435785 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:22:14.435836 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:22:14.437451 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:22:14.437497 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:22:14.439177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:22:14.439227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:22:14.454112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:22:14.456012 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:22:14.456110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:22:14.459182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:22:14.459239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:22:14.462512 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:22:14.462601 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:22:14.467918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:22:14.468070 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:22:14.470247 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:22:14.476221 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:22:14.488079 systemd[1]: Switching root.
Apr 13 19:22:14.525255 systemd-journald[237]: Journal stopped
Apr 13 19:22:15.433838 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:22:15.433905 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:22:15.433917 kernel: SELinux: policy capability open_perms=1
Apr 13 19:22:15.433926 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:22:15.433936 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:22:15.433949 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:22:15.433959 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:22:15.433968 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:22:15.433981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:22:15.433992 systemd[1]: Successfully loaded SELinux policy in 34.767ms.
Apr 13 19:22:15.434011 kernel: audit: type=1403 audit(1776108134.705:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:22:15.434033 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.663ms.
Apr 13 19:22:15.434047 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:22:15.434058 systemd[1]: Detected virtualization kvm.
Apr 13 19:22:15.434069 systemd[1]: Detected architecture arm64.
Apr 13 19:22:15.434083 systemd[1]: Detected first boot.
Apr 13 19:22:15.434095 systemd[1]: Hostname set to .
Apr 13 19:22:15.434105 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:22:15.434118 zram_generator::config[1074]: No configuration found.
Apr 13 19:22:15.434130 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:22:15.434141 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:22:15.434151 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 19:22:15.434165 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:22:15.434177 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:22:15.434193 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:22:15.434203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:22:15.434214 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:22:15.434224 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:22:15.434235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:22:15.434245 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:22:15.434256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:22:15.434266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:22:15.434277 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:22:15.434288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:22:15.434299 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:22:15.434309 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:22:15.434320 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:22:15.434330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:22:15.434340 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:22:15.434351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:22:15.434366 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:22:15.434379 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:22:15.434394 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:22:15.434404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:22:15.434415 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 19:22:15.434425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:22:15.434435 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:22:15.434446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:22:15.434456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:22:15.434469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:22:15.434479 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:22:15.434490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:22:15.434500 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:22:15.434510 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:22:15.434521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:22:15.434535 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:22:15.434547 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 19:22:15.434558 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:22:15.434568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:15.434578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 13 19:22:15.434590 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:22:15.434600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:15.434618 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:22:15.434632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:22:15.434643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:22:15.434654 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:15.434664 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:22:15.434675 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 13 19:22:15.434686 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 13 19:22:15.434696 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:22:15.434706 kernel: fuse: init (API version 7.39) Apr 13 19:22:15.434718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:22:15.434729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:22:15.434739 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 19:22:15.434750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:22:15.434782 systemd-journald[1158]: Collecting audit messages is disabled. Apr 13 19:22:15.434805 systemd-journald[1158]: Journal started Apr 13 19:22:15.434828 systemd-journald[1158]: Runtime Journal (/run/log/journal/ab1f80176308422e870a89f61e84acae) is 8.0M, max 76.6M, 68.6M free. 
Apr 13 19:22:15.441095 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:22:15.443922 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 19:22:15.444231 kernel: ACPI: bus type drm_connector registered Apr 13 19:22:15.444946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:22:15.448048 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:22:15.448952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:22:15.452362 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:22:15.453379 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:22:15.454348 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:22:15.456227 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:22:15.456391 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:22:15.457928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:15.459163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:15.460222 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:22:15.460363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:22:15.461208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:15.461344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:22:15.463194 kernel: loop: module loaded Apr 13 19:22:15.463360 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:22:15.463508 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:22:15.464528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:22:15.465689 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Apr 13 19:22:15.467327 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:22:15.467520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:22:15.468731 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:22:15.484479 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:22:15.491239 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:22:15.505159 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:22:15.508136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:22:15.511675 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:22:15.521221 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:22:15.522461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:15.528302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:22:15.530164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:22:15.536469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:22:15.539045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:22:15.545836 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:22:15.555910 systemd-journald[1158]: Time spent on flushing to /var/log/journal/ab1f80176308422e870a89f61e84acae is 49.648ms for 1114 entries. 
Apr 13 19:22:15.555910 systemd-journald[1158]: System Journal (/var/log/journal/ab1f80176308422e870a89f61e84acae) is 8.0M, max 584.8M, 576.8M free. Apr 13 19:22:15.628353 systemd-journald[1158]: Received client request to flush runtime journal. Apr 13 19:22:15.548414 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:22:15.552013 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:22:15.561679 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:22:15.565321 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:22:15.588089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:22:15.619652 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Apr 13 19:22:15.619663 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Apr 13 19:22:15.629434 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:22:15.632495 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:22:15.651068 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:22:15.652350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:22:15.665301 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:22:15.682253 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 19:22:15.693556 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:22:15.699319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:22:15.715278 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
Apr 13 19:22:15.715647 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Apr 13 19:22:15.720329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:22:16.051936 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:22:16.062330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:22:16.084733 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Apr 13 19:22:16.107016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:22:16.122225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:22:16.142188 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:22:16.187654 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Apr 13 19:22:16.209932 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:22:16.300050 systemd-networkd[1249]: lo: Link UP Apr 13 19:22:16.300057 systemd-networkd[1249]: lo: Gained carrier Apr 13 19:22:16.301670 systemd-networkd[1249]: Enumeration completed Apr 13 19:22:16.301814 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:22:16.303229 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.303242 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:22:16.303930 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.303939 systemd-networkd[1249]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 13 19:22:16.304431 systemd-networkd[1249]: eth0: Link UP Apr 13 19:22:16.304440 systemd-networkd[1249]: eth0: Gained carrier Apr 13 19:22:16.304453 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.314201 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:22:16.315996 systemd-networkd[1249]: eth1: Link UP Apr 13 19:22:16.316010 systemd-networkd[1249]: eth1: Gained carrier Apr 13 19:22:16.316063 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.320482 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.332064 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 19:22:16.350575 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 19:22:16.350665 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Apr 13 19:22:16.350842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:16.363096 systemd-networkd[1249]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 19:22:16.365500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:16.370104 systemd-networkd[1249]: eth0: DHCPv4 address 178.105.8.180/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 19:22:16.371529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 13 19:22:16.371671 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:22:16.374182 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 13 19:22:16.374255 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 19:22:16.374275 kernel: [drm] features: -context_init Apr 13 19:22:16.375109 kernel: [drm] number of scanouts: 1 Apr 13 19:22:16.375172 kernel: [drm] number of cap sets: 0 Apr 13 19:22:16.377247 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 19:22:16.385212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:16.387078 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 19:22:16.387784 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:22:16.387830 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:22:16.389482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:16.389901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:16.399746 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 19:22:16.446106 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1255) Apr 13 19:22:16.463163 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:22:16.463334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:22:16.470146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:16.472663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 13 19:22:16.519434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 19:22:16.520488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:16.520672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:22:16.527224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:22:16.585071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:22:16.610993 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:22:16.624751 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:22:16.641297 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:22:16.671790 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:22:16.674037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:22:16.681393 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:22:16.685670 lvm[1310]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:22:16.713214 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 19:22:16.715905 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:22:16.716846 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:22:16.716958 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:22:16.717646 systemd[1]: Reached target machines.target - Containers. 
Apr 13 19:22:16.719725 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:22:16.725329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:22:16.729265 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:22:16.730734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:16.735122 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:22:16.744238 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:22:16.750674 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:22:16.752841 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:22:16.770471 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:22:16.782520 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:22:16.784471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 13 19:22:16.792232 kernel: loop0: detected capacity change from 0 to 8 Apr 13 19:22:16.799164 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:22:16.818093 kernel: loop1: detected capacity change from 0 to 114328 Apr 13 19:22:16.846054 kernel: loop2: detected capacity change from 0 to 114432 Apr 13 19:22:16.874069 kernel: loop3: detected capacity change from 0 to 209336 Apr 13 19:22:16.917243 kernel: loop4: detected capacity change from 0 to 8 Apr 13 19:22:16.921328 kernel: loop5: detected capacity change from 0 to 114328 Apr 13 19:22:16.935525 kernel: loop6: detected capacity change from 0 to 114432 Apr 13 19:22:16.949059 kernel: loop7: detected capacity change from 0 to 209336 Apr 13 19:22:16.960670 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 19:22:16.961161 (sd-merge)[1331]: Merged extensions into '/usr'. Apr 13 19:22:16.966281 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:22:16.966448 systemd[1]: Reloading... Apr 13 19:22:17.051840 zram_generator::config[1363]: No configuration found. Apr 13 19:22:17.132089 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:22:17.166094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:22:17.225825 systemd[1]: Reloading finished in 258 ms. Apr 13 19:22:17.246941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:22:17.248666 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:22:17.260304 systemd[1]: Starting ensure-sysext.service... Apr 13 19:22:17.267301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 13 19:22:17.272321 systemd[1]: Reloading requested from client PID 1404 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:22:17.272340 systemd[1]: Reloading... Apr 13 19:22:17.305259 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:22:17.305559 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:22:17.306376 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:22:17.306602 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Apr 13 19:22:17.306657 systemd-tmpfiles[1405]: ACLs are not supported, ignoring. Apr 13 19:22:17.310550 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:22:17.310568 systemd-tmpfiles[1405]: Skipping /boot Apr 13 19:22:17.318971 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:22:17.318990 systemd-tmpfiles[1405]: Skipping /boot Apr 13 19:22:17.357060 zram_generator::config[1434]: No configuration found. Apr 13 19:22:17.470347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:22:17.531141 systemd[1]: Reloading finished in 258 ms. Apr 13 19:22:17.551852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:22:17.570324 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:22:17.576285 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:22:17.580267 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 13 19:22:17.589956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:22:17.602226 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:22:17.609323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:17.614768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:22:17.622277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:22:17.635329 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:17.637293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:17.643230 systemd-networkd[1249]: eth1: Gained IPv6LL Apr 13 19:22:17.651764 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:22:17.654752 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:22:17.656093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:17.656248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:17.670199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:17.670386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:22:17.673760 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:22:17.674895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:22:17.680362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:17.695430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 13 19:22:17.698272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:17.698429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:17.705374 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:22:17.709642 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:22:17.727297 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 19:22:17.731350 systemd-resolved[1486]: Positive Trust Anchors: Apr 13 19:22:17.731374 systemd-resolved[1486]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:22:17.731407 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:22:17.731991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:22:17.733351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:22:17.739085 augenrules[1516]: No rules Apr 13 19:22:17.738059 systemd-resolved[1486]: Using system hostname 'ci-4081-3-7-f-96a1162b98'. Apr 13 19:22:17.741398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:22:17.743177 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 13 19:22:17.746923 systemd[1]: Reached target network.target - Network. Apr 13 19:22:17.748175 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:22:17.750111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:22:17.751122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:22:17.762394 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:22:17.767379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:22:17.772917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:22:17.774527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:22:17.775066 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:22:17.776050 systemd[1]: Finished ensure-sysext.service. Apr 13 19:22:17.777587 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:22:17.778701 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:22:17.778949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:22:17.781873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:22:17.782221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:22:17.786598 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:22:17.789297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 13 19:22:17.793372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:22:17.793706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:22:17.802325 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 19:22:17.848128 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 19:22:17.850294 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:22:17.851316 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:22:17.852399 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:22:17.853520 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:22:17.854569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:22:17.854628 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:22:17.855352 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:22:17.856763 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:22:17.858802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:22:17.860785 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:22:17.863773 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:22:17.866115 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:22:17.869090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:22:17.871588 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Apr 13 19:22:17.872371 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:22:17.872930 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:22:17.873943 systemd[1]: System is tainted: cgroupsv1 Apr 13 19:22:17.873996 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:22:17.874020 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:22:17.878222 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:22:17.881304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:22:17.889434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:22:17.896917 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:22:17.901942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:22:17.902715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:22:17.907443 systemd-timesyncd[1543]: Contacted time server 77.90.0.148:123 (0.flatcar.pool.ntp.org). Apr 13 19:22:17.909761 systemd-timesyncd[1543]: Initial clock synchronization to Mon 2026-04-13 19:22:17.648458 UTC. Apr 13 19:22:17.913463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:22:17.915632 jq[1551]: false Apr 13 19:22:17.927276 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:22:17.937233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:22:17.939394 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:22:17.949333 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Apr 13 19:22:17.952986 dbus-daemon[1549]: [system] SELinux support is enabled Apr 13 19:22:17.955191 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:22:17.965520 extend-filesystems[1552]: Found loop4 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found loop5 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found loop6 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found loop7 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda1 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda2 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda3 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found usr Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda4 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda6 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda7 Apr 13 19:22:17.965520 extend-filesystems[1552]: Found sda9 Apr 13 19:22:17.965520 extend-filesystems[1552]: Checking size of /dev/sda9 Apr 13 19:22:17.980827 coreos-metadata[1548]: Apr 13 19:22:17.969 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 19:22:17.980827 coreos-metadata[1548]: Apr 13 19:22:17.978 INFO Fetch successful Apr 13 19:22:17.980827 coreos-metadata[1548]: Apr 13 19:22:17.979 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 19:22:17.980827 coreos-metadata[1548]: Apr 13 19:22:17.980 INFO Fetch successful Apr 13 19:22:17.967878 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:22:17.985428 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:22:17.991208 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:22:17.994405 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 13 19:22:18.004412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:22:18.005799 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:22:18.021448 extend-filesystems[1552]: Resized partition /dev/sda9 Apr 13 19:22:18.025319 systemd-networkd[1249]: eth0: Gained IPv6LL Apr 13 19:22:18.030114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:22:18.030339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:22:18.033096 extend-filesystems[1587]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:22:18.036562 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:22:18.036788 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:22:18.043040 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 13 19:22:18.043585 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:22:18.044066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:22:18.057771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:22:18.057810 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:22:18.062725 jq[1579]: true Apr 13 19:22:18.061278 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:22:18.061299 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 13 19:22:18.076017 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:22:18.098696 tar[1592]: linux-arm64/LICENSE Apr 13 19:22:18.119207 tar[1592]: linux-arm64/helm Apr 13 19:22:18.125592 jq[1609]: true Apr 13 19:22:18.120440 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:22:18.203309 update_engine[1575]: I20260413 19:22:18.195320 1575 main.cc:92] Flatcar Update Engine starting Apr 13 19:22:18.217059 update_engine[1575]: I20260413 19:22:18.215072 1575 update_check_scheduler.cc:74] Next update check in 10m36s Apr 13 19:22:18.214500 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:22:18.216185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:22:18.217705 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:22:18.220276 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:22:18.221902 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:22:18.236041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1253) Apr 13 19:22:18.251604 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 13 19:22:18.256347 extend-filesystems[1587]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 19:22:18.256347 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 13 19:22:18.256347 extend-filesystems[1587]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 13 19:22:18.266952 extend-filesystems[1552]: Resized filesystem in /dev/sda9 Apr 13 19:22:18.266952 extend-filesystems[1552]: Found sr0 Apr 13 19:22:18.270468 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 13 19:22:18.271373 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:22:18.316053 systemd-logind[1571]: New seat seat0. Apr 13 19:22:18.319525 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:22:18.321644 bash[1652]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:22:18.319546 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 13 19:22:18.319848 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:22:18.331284 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:22:18.350669 systemd[1]: Starting sshkeys.service... Apr 13 19:22:18.380404 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:22:18.394718 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:22:18.449062 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:22:18.451874 coreos-metadata[1659]: Apr 13 19:22:18.451 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 13 19:22:18.453235 containerd[1605]: time="2026-04-13T19:22:18.453133001Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:22:18.457181 coreos-metadata[1659]: Apr 13 19:22:18.454 INFO Fetch successful Apr 13 19:22:18.461127 unknown[1659]: wrote ssh authorized keys file for user: core Apr 13 19:22:18.497680 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:22:18.500496 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:22:18.512280 systemd[1]: Finished sshkeys.service. 
Apr 13 19:22:18.528427 containerd[1605]: time="2026-04-13T19:22:18.528368575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.531549 containerd[1605]: time="2026-04-13T19:22:18.531493674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:22:18.531549 containerd[1605]: time="2026-04-13T19:22:18.531542133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:22:18.531675 containerd[1605]: time="2026-04-13T19:22:18.531562105Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:22:18.531759 containerd[1605]: time="2026-04-13T19:22:18.531734346Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:22:18.531784 containerd[1605]: time="2026-04-13T19:22:18.531756911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.531859 containerd[1605]: time="2026-04-13T19:22:18.531839316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:22:18.531859 containerd[1605]: time="2026-04-13T19:22:18.531857043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532105 containerd[1605]: time="2026-04-13T19:22:18.532084787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532131 containerd[1605]: time="2026-04-13T19:22:18.532104605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532131 containerd[1605]: time="2026-04-13T19:22:18.532117803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532131 containerd[1605]: time="2026-04-13T19:22:18.532126551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532213 containerd[1605]: time="2026-04-13T19:22:18.532196337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.532405 containerd[1605]: time="2026-04-13T19:22:18.532387040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:22:18.533711 containerd[1605]: time="2026-04-13T19:22:18.533671837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:22:18.533711 containerd[1605]: time="2026-04-13T19:22:18.533705705Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:22:18.533886 containerd[1605]: time="2026-04-13T19:22:18.533864360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 13 19:22:18.533931 containerd[1605]: time="2026-04-13T19:22:18.533914832Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.539662281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.540159804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.540181557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.540259781Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.540281727Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:22:18.541033 containerd[1605]: time="2026-04-13T19:22:18.540505756Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541597412Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541779290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541797211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541863707Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541883757Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541897652Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542132 containerd[1605]: time="2026-04-13T19:22:18.541910580Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542282 containerd[1605]: time="2026-04-13T19:22:18.542199402Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542282 containerd[1605]: time="2026-04-13T19:22:18.542221851Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542282 containerd[1605]: time="2026-04-13T19:22:18.542235282Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542282 containerd[1605]: time="2026-04-13T19:22:18.542248868Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542282 containerd[1605]: time="2026-04-13T19:22:18.542270852Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542293998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542309055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542321054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542339865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542352328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542364869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542376945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542389369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542478431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542501151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542614 containerd[1605]: time="2026-04-13T19:22:18.542515356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542811 containerd[1605]: time="2026-04-13T19:22:18.542530064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542811 containerd[1605]: time="2026-04-13T19:22:18.542769498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 13 19:22:18.542811 containerd[1605]: time="2026-04-13T19:22:18.542786645Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:22:18.542811 containerd[1605]: time="2026-04-13T19:22:18.542809210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542879 containerd[1605]: time="2026-04-13T19:22:18.542830266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.542879 containerd[1605]: time="2026-04-13T19:22:18.542844510Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543069970Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543103335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543462562Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543481606Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543492521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.543813 containerd[1605]: time="2026-04-13T19:22:18.543511873Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 13 19:22:18.543948 containerd[1605]: time="2026-04-13T19:22:18.543521898Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:22:18.543948 containerd[1605]: time="2026-04-13T19:22:18.543905859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 19:22:18.544719 containerd[1605]: time="2026-04-13T19:22:18.544638868Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:22:18.544842 containerd[1605]: time="2026-04-13T19:22:18.544723169Z" level=info msg="Connect containerd service" Apr 13 19:22:18.544842 containerd[1605]: time="2026-04-13T19:22:18.544771009Z" level=info msg="using legacy CRI server" Apr 13 19:22:18.544842 containerd[1605]: time="2026-04-13T19:22:18.544778286Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:22:18.547029 containerd[1605]: time="2026-04-13T19:22:18.545045471Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:22:18.547029 containerd[1605]: time="2026-04-13T19:22:18.546511682Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:22:18.547029 containerd[1605]: time="2026-04-13T19:22:18.546933110Z" level=info msg="Start subscribing containerd event" Apr 13 
19:22:18.547029 containerd[1605]: time="2026-04-13T19:22:18.546977312Z" level=info msg="Start recovering state" Apr 13 19:22:18.547138 containerd[1605]: time="2026-04-13T19:22:18.547051897Z" level=info msg="Start event monitor" Apr 13 19:22:18.547138 containerd[1605]: time="2026-04-13T19:22:18.547064206Z" level=info msg="Start snapshots syncer" Apr 13 19:22:18.547138 containerd[1605]: time="2026-04-13T19:22:18.547073495Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:22:18.547138 containerd[1605]: time="2026-04-13T19:22:18.547086423Z" level=info msg="Start streaming server" Apr 13 19:22:18.548720 containerd[1605]: time="2026-04-13T19:22:18.548693136Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:22:18.548772 containerd[1605]: time="2026-04-13T19:22:18.548741324Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:22:18.549685 containerd[1605]: time="2026-04-13T19:22:18.548791216Z" level=info msg="containerd successfully booted in 0.099580s" Apr 13 19:22:18.548906 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:22:18.836968 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:22:18.890420 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:22:18.896777 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:22:18.909253 tar[1592]: linux-arm64/README.md Apr 13 19:22:18.924107 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:22:18.925153 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:22:18.929803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:22:18.936516 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:22:18.962790 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:22:18.970452 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 13 19:22:18.975574 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 13 19:22:18.977482 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:22:19.245261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:22:19.247658 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:22:19.250160 systemd[1]: Startup finished in 6.789s (kernel) + 4.579s (userspace) = 11.369s. Apr 13 19:22:19.255790 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:22:19.756915 kubelet[1707]: E0413 19:22:19.756838 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:22:19.764354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:22:19.764554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:22:30.015385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:22:30.023357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:22:30.164394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:22:30.166982 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:22:30.211785 kubelet[1731]: E0413 19:22:30.211667 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:22:30.219259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:22:30.219436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:22:40.470324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:22:40.481430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:22:40.608314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:22:40.612331 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:22:40.655149 kubelet[1751]: E0413 19:22:40.655100 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:22:40.659279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:22:40.659458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:22:50.909985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:22:50.924688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:22:51.118270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:22:51.122741 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:22:51.167301 kubelet[1770]: E0413 19:22:51.167144 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:22:51.172769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:22:51.173166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:22:58.626431 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:22:58.633470 systemd[1]: Started sshd@0-178.105.8.180:22-50.85.169.122:49906.service - OpenSSH per-connection server daemon (50.85.169.122:49906). Apr 13 19:22:58.767067 sshd[1779]: Accepted publickey for core from 50.85.169.122 port 49906 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:22:58.769369 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:22:58.779692 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:22:58.785509 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:22:58.789056 systemd-logind[1571]: New session 1 of user core. Apr 13 19:22:58.813706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:22:58.825570 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 13 19:22:58.830494 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:22:58.938371 systemd[1785]: Queued start job for default target default.target. Apr 13 19:22:58.939516 systemd[1785]: Created slice app.slice - User Application Slice. Apr 13 19:22:58.939785 systemd[1785]: Reached target paths.target - Paths. Apr 13 19:22:58.939806 systemd[1785]: Reached target timers.target - Timers. Apr 13 19:22:58.952227 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:22:58.965971 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:22:58.966063 systemd[1785]: Reached target sockets.target - Sockets. Apr 13 19:22:58.966076 systemd[1785]: Reached target basic.target - Basic System. Apr 13 19:22:58.966125 systemd[1785]: Reached target default.target - Main User Target. Apr 13 19:22:58.966153 systemd[1785]: Startup finished in 128ms. Apr 13 19:22:58.966624 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:22:58.973387 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:22:59.097614 systemd[1]: Started sshd@1-178.105.8.180:22-50.85.169.122:49908.service - OpenSSH per-connection server daemon (50.85.169.122:49908). Apr 13 19:22:59.220328 sshd[1797]: Accepted publickey for core from 50.85.169.122 port 49908 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:22:59.222663 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:22:59.229147 systemd-logind[1571]: New session 2 of user core. Apr 13 19:22:59.234422 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:22:59.341171 sshd[1797]: pam_unix(sshd:session): session closed for user core Apr 13 19:22:59.346524 systemd[1]: sshd@1-178.105.8.180:22-50.85.169.122:49908.service: Deactivated successfully. Apr 13 19:22:59.348876 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 13 19:22:59.349983 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit.
Apr 13 19:22:59.350995 systemd-logind[1571]: Removed session 2.
Apr 13 19:22:59.368397 systemd[1]: Started sshd@2-178.105.8.180:22-50.85.169.122:49922.service - OpenSSH per-connection server daemon (50.85.169.122:49922).
Apr 13 19:22:59.484264 sshd[1805]: Accepted publickey for core from 50.85.169.122 port 49922 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:22:59.485725 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:22:59.490936 systemd-logind[1571]: New session 3 of user core.
Apr 13 19:22:59.501618 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 19:22:59.597532 sshd[1805]: pam_unix(sshd:session): session closed for user core
Apr 13 19:22:59.601511 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit.
Apr 13 19:22:59.603393 systemd[1]: sshd@2-178.105.8.180:22-50.85.169.122:49922.service: Deactivated successfully.
Apr 13 19:22:59.605609 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 19:22:59.606579 systemd-logind[1571]: Removed session 3.
Apr 13 19:22:59.621356 systemd[1]: Started sshd@3-178.105.8.180:22-50.85.169.122:60542.service - OpenSSH per-connection server daemon (50.85.169.122:60542).
Apr 13 19:22:59.745870 sshd[1813]: Accepted publickey for core from 50.85.169.122 port 60542 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:22:59.749174 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:22:59.755131 systemd-logind[1571]: New session 4 of user core.
Apr 13 19:22:59.764640 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:22:59.871195 sshd[1813]: pam_unix(sshd:session): session closed for user core
Apr 13 19:22:59.876457 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:22:59.877597 systemd[1]: sshd@3-178.105.8.180:22-50.85.169.122:60542.service: Deactivated successfully.
Apr 13 19:22:59.882356 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:22:59.886947 systemd-logind[1571]: Removed session 4.
Apr 13 19:22:59.894700 systemd[1]: Started sshd@4-178.105.8.180:22-50.85.169.122:60554.service - OpenSSH per-connection server daemon (50.85.169.122:60554).
Apr 13 19:23:00.007181 sshd[1821]: Accepted publickey for core from 50.85.169.122 port 60554 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:00.009533 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:00.016001 systemd-logind[1571]: New session 5 of user core.
Apr 13 19:23:00.022637 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:23:00.122145 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:23:00.122915 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:00.140428 sudo[1825]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:00.157277 sshd[1821]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:00.163316 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:23:00.163610 systemd[1]: sshd@4-178.105.8.180:22-50.85.169.122:60554.service: Deactivated successfully.
Apr 13 19:23:00.168410 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:23:00.170405 systemd-logind[1571]: Removed session 5.
Apr 13 19:23:00.181416 systemd[1]: Started sshd@5-178.105.8.180:22-50.85.169.122:60556.service - OpenSSH per-connection server daemon (50.85.169.122:60556).
Apr 13 19:23:00.322840 sshd[1830]: Accepted publickey for core from 50.85.169.122 port 60556 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:00.324134 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:00.334139 systemd-logind[1571]: New session 6 of user core.
Apr 13 19:23:00.339595 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:23:00.425650 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:23:00.426498 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:00.431172 sudo[1835]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:00.440479 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:23:00.440808 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:00.462265 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:00.463780 auditctl[1838]: No rules
Apr 13 19:23:00.464437 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:23:00.464675 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:00.470843 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:00.508545 augenrules[1857]: No rules
Apr 13 19:23:00.511490 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:00.513850 sudo[1834]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:00.531655 sshd[1830]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:00.537557 systemd[1]: sshd@5-178.105.8.180:22-50.85.169.122:60556.service: Deactivated successfully.
Apr 13 19:23:00.542171 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:23:00.543497 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:23:00.545475 systemd-logind[1571]: Removed session 6.
Apr 13 19:23:00.558684 systemd[1]: Started sshd@6-178.105.8.180:22-50.85.169.122:60560.service - OpenSSH per-connection server daemon (50.85.169.122:60560).
Apr 13 19:23:00.678086 sshd[1866]: Accepted publickey for core from 50.85.169.122 port 60560 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:23:00.681761 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:00.687476 systemd-logind[1571]: New session 7 of user core.
Apr 13 19:23:00.696535 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:23:00.783964 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:23:00.784870 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:01.095683 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:23:01.097830 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:23:01.347268 dockerd[1886]: time="2026-04-13T19:23:01.346888708Z" level=info msg="Starting up"
Apr 13 19:23:01.352126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 19:23:01.363308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:01.506133 dockerd[1886]: time="2026-04-13T19:23:01.504664194Z" level=info msg="Loading containers: start."
Apr 13 19:23:01.571335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:01.579772 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:01.643109 kubelet[1924]: E0413 19:23:01.642921 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:01.649594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:01.649969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:01.665079 kernel: Initializing XFRM netlink socket
Apr 13 19:23:01.755067 systemd-networkd[1249]: docker0: Link UP
Apr 13 19:23:01.781432 dockerd[1886]: time="2026-04-13T19:23:01.780561379Z" level=info msg="Loading containers: done."
Apr 13 19:23:01.799060 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2370265725-merged.mount: Deactivated successfully.
Apr 13 19:23:01.801746 dockerd[1886]: time="2026-04-13T19:23:01.801676526Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:23:01.801902 dockerd[1886]: time="2026-04-13T19:23:01.801861851Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:23:01.802047 dockerd[1886]: time="2026-04-13T19:23:01.801999094Z" level=info msg="Daemon has completed initialization"
Apr 13 19:23:01.844588 dockerd[1886]: time="2026-04-13T19:23:01.844363431Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:23:01.845524 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:23:02.338776 containerd[1605]: time="2026-04-13T19:23:02.338724192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 19:23:02.945926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206011860.mount: Deactivated successfully.
Apr 13 19:23:03.882931 update_engine[1575]: I20260413 19:23:03.882834 1575 update_attempter.cc:509] Updating boot flags...
Apr 13 19:23:03.935598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2108)
Apr 13 19:23:04.040061 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1920)
Apr 13 19:23:04.125099 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1920)
Apr 13 19:23:04.291042 containerd[1605]: time="2026-04-13T19:23:04.289762777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:04.291429 containerd[1605]: time="2026-04-13T19:23:04.291370253Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283781"
Apr 13 19:23:04.294174 containerd[1605]: time="2026-04-13T19:23:04.292388276Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:04.297136 containerd[1605]: time="2026-04-13T19:23:04.297083500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:04.298503 containerd[1605]: time="2026-04-13T19:23:04.298456810Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 1.959669857s"
Apr 13 19:23:04.298503 containerd[1605]: time="2026-04-13T19:23:04.298504411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\""
Apr 13 19:23:04.299356 containerd[1605]: time="2026-04-13T19:23:04.299323390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 19:23:05.565672 containerd[1605]: time="2026-04-13T19:23:05.565605568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:05.568385 containerd[1605]: time="2026-04-13T19:23:05.567741773Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551922"
Apr 13 19:23:05.571063 containerd[1605]: time="2026-04-13T19:23:05.569096642Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:05.573124 containerd[1605]: time="2026-04-13T19:23:05.573082166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:05.574470 containerd[1605]: time="2026-04-13T19:23:05.574425715Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.275062084s"
Apr 13 19:23:05.574470 containerd[1605]: time="2026-04-13T19:23:05.574470556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\""
Apr 13 19:23:05.575094 containerd[1605]: time="2026-04-13T19:23:05.574959926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 19:23:06.703914 containerd[1605]: time="2026-04-13T19:23:06.702740262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:06.703914 containerd[1605]: time="2026-04-13T19:23:06.703868165Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301253"
Apr 13 19:23:06.704572 containerd[1605]: time="2026-04-13T19:23:06.704531098Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:06.711367 containerd[1605]: time="2026-04-13T19:23:06.711314554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:06.713831 containerd[1605]: time="2026-04-13T19:23:06.713768684Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.138773237s"
Apr 13 19:23:06.713831 containerd[1605]: time="2026-04-13T19:23:06.713822165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\""
Apr 13 19:23:06.715141 containerd[1605]: time="2026-04-13T19:23:06.715112231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 19:23:07.612469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760098565.mount: Deactivated successfully.
Apr 13 19:23:07.972456 containerd[1605]: time="2026-04-13T19:23:07.972261980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:07.973948 containerd[1605]: time="2026-04-13T19:23:07.973872211Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148979"
Apr 13 19:23:07.976059 containerd[1605]: time="2026-04-13T19:23:07.974935472Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:07.977323 containerd[1605]: time="2026-04-13T19:23:07.977253076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:07.978056 containerd[1605]: time="2026-04-13T19:23:07.977847488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 1.262697936s"
Apr 13 19:23:07.978056 containerd[1605]: time="2026-04-13T19:23:07.977915609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\""
Apr 13 19:23:07.978879 containerd[1605]: time="2026-04-13T19:23:07.978520980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 19:23:08.523354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324367236.mount: Deactivated successfully.
Apr 13 19:23:09.502530 containerd[1605]: time="2026-04-13T19:23:09.502471106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:09.503953 containerd[1605]: time="2026-04-13T19:23:09.503893210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Apr 13 19:23:09.505432 containerd[1605]: time="2026-04-13T19:23:09.505345036Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:09.508611 containerd[1605]: time="2026-04-13T19:23:09.508569772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:09.510484 containerd[1605]: time="2026-04-13T19:23:09.510304962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.531748261s"
Apr 13 19:23:09.510484 containerd[1605]: time="2026-04-13T19:23:09.510353323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 13 19:23:09.511397 containerd[1605]: time="2026-04-13T19:23:09.511373741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 19:23:10.037859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042296016.mount: Deactivated successfully.
Apr 13 19:23:10.047740 containerd[1605]: time="2026-04-13T19:23:10.047649272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:10.051281 containerd[1605]: time="2026-04-13T19:23:10.051192171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 13 19:23:10.052678 containerd[1605]: time="2026-04-13T19:23:10.052588675Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:10.055544 containerd[1605]: time="2026-04-13T19:23:10.055458563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:10.056943 containerd[1605]: time="2026-04-13T19:23:10.056370098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 544.962197ms"
Apr 13 19:23:10.056943 containerd[1605]: time="2026-04-13T19:23:10.056408698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 13 19:23:10.057212 containerd[1605]: time="2026-04-13T19:23:10.057182151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 19:23:10.582150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636829092.mount: Deactivated successfully.
Apr 13 19:23:11.369490 containerd[1605]: time="2026-04-13T19:23:11.369349733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:11.371709 containerd[1605]: time="2026-04-13T19:23:11.371672930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Apr 13 19:23:11.372913 containerd[1605]: time="2026-04-13T19:23:11.372886189Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:11.375964 containerd[1605]: time="2026-04-13T19:23:11.375919758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:11.377606 containerd[1605]: time="2026-04-13T19:23:11.377563664Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.320347272s"
Apr 13 19:23:11.377606 containerd[1605]: time="2026-04-13T19:23:11.377603304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 13 19:23:11.730357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 13 19:23:11.738437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:11.880323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:11.884245 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:11.933244 kubelet[2288]: E0413 19:23:11.933192 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:11.937858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:11.940181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:17.282013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:17.292507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:17.330546 systemd[1]: Reloading requested from client PID 2304 ('systemctl') (unit session-7.scope)...
Apr 13 19:23:17.330708 systemd[1]: Reloading...
Apr 13 19:23:17.452072 zram_generator::config[2345]: No configuration found.
Apr 13 19:23:17.561390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:23:17.631955 systemd[1]: Reloading finished in 300 ms.
Apr 13 19:23:17.682610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 19:23:17.682691 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 19:23:17.683071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:17.700598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:17.828255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:17.840484 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:23:17.885111 kubelet[2404]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:23:17.885111 kubelet[2404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 19:23:17.885111 kubelet[2404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:23:17.885581 kubelet[2404]: I0413 19:23:17.885156 2404 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 19:23:19.261082 kubelet[2404]: I0413 19:23:19.260904 2404 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 19:23:19.261082 kubelet[2404]: I0413 19:23:19.260957 2404 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:23:19.263044 kubelet[2404]: I0413 19:23:19.262120 2404 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 19:23:19.288392 kubelet[2404]: E0413 19:23:19.288355 2404 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.8.180:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 19:23:19.291333 kubelet[2404]: I0413 19:23:19.291306 2404 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:23:19.305536 kubelet[2404]: E0413 19:23:19.305482 2404 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:23:19.305536 kubelet[2404]: I0413 19:23:19.305530 2404 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:23:19.309830 kubelet[2404]: I0413 19:23:19.309781 2404 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 19:23:19.311489 kubelet[2404]: I0413 19:23:19.311432 2404 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:23:19.311665 kubelet[2404]: I0413 19:23:19.311484 2404 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-f-96a1162b98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 13 19:23:19.311665 kubelet[2404]: I0413 19:23:19.311661 2404 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 19:23:19.311665 kubelet[2404]: I0413 19:23:19.311670 2404 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 19:23:19.311937 kubelet[2404]: I0413 19:23:19.311889 2404 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:23:19.319395 kubelet[2404]: I0413 19:23:19.319257 2404 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 19:23:19.319575 kubelet[2404]: I0413 19:23:19.319426 2404 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:23:19.319575 kubelet[2404]: I0413 19:23:19.319460 2404 kubelet.go:386] "Adding apiserver pod source"
Apr 13 19:23:19.319575 kubelet[2404]: I0413 19:23:19.319474 2404 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:23:19.329733 kubelet[2404]: E0413 19:23:19.328650 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.8.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-f-96a1162b98&limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 19:23:19.331414 kubelet[2404]: E0413 19:23:19.331371 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.8.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 19:23:19.331860 kubelet[2404]: I0413 19:23:19.331836 2404 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:23:19.333206 kubelet[2404]: I0413 19:23:19.333185 2404 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:23:19.333466 kubelet[2404]: W0413 19:23:19.333451 2404 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 19:23:19.337407 kubelet[2404]: I0413 19:23:19.337381 2404 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 19:23:19.337802 kubelet[2404]: I0413 19:23:19.337574 2404 server.go:1289] "Started kubelet"
Apr 13 19:23:19.337937 kubelet[2404]: I0413 19:23:19.337902 2404 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:23:19.339748 kubelet[2404]: I0413 19:23:19.339722 2404 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 19:23:19.342070 kubelet[2404]: I0413 19:23:19.341425 2404 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:23:19.342070 kubelet[2404]: I0413 19:23:19.341784 2404 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:23:19.343257 kubelet[2404]: E0413 19:23:19.341941 2404 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.8.180:6443/api/v1/namespaces/default/events\": dial tcp 178.105.8.180:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-f-96a1162b98.18a600ff402c1666 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-f-96a1162b98,UID:ci-4081-3-7-f-96a1162b98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-f-96a1162b98,},FirstTimestamp:2026-04-13 19:23:19.337530982 +0000 UTC m=+1.493196554,LastTimestamp:2026-04-13 19:23:19.337530982 +0000 UTC m=+1.493196554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-f-96a1162b98,}"
Apr 13 19:23:19.347475 kubelet[2404]: E0413 19:23:19.347437 2404 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 19:23:19.348339 kubelet[2404]: I0413 19:23:19.348305 2404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 19:23:19.349858 kubelet[2404]: I0413 19:23:19.349819 2404 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 19:23:19.349954 kubelet[2404]: I0413 19:23:19.348398 2404 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:23:19.350432 kubelet[2404]: I0413 19:23:19.350276 2404 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 19:23:19.350432 kubelet[2404]: I0413 19:23:19.350338 2404 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 19:23:19.352184 kubelet[2404]: E0413 19:23:19.351366 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.8.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 19:23:19.352184 kubelet[2404]: E0413 19:23:19.351923 2404 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-f-96a1162b98\" not found"
Apr 13 19:23:19.352184 kubelet[2404]: E0413 19:23:19.352050 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.8.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-f-96a1162b98?timeout=10s\": dial tcp 178.105.8.180:6443: connect: connection refused" interval="200ms"
Apr 13 19:23:19.352714 kubelet[2404]: I0413 19:23:19.352681 2404 factory.go:221] Registration of the crio container factory failed: Get
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:23:19.354131 kubelet[2404]: I0413 19:23:19.354105 2404 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:23:19.354131 kubelet[2404]: I0413 19:23:19.354124 2404 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:23:19.388906 kubelet[2404]: I0413 19:23:19.388845 2404 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:23:19.391483 kubelet[2404]: I0413 19:23:19.391177 2404 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:23:19.391483 kubelet[2404]: I0413 19:23:19.391215 2404 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:23:19.391483 kubelet[2404]: I0413 19:23:19.391240 2404 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:23:19.391483 kubelet[2404]: I0413 19:23:19.391247 2404 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:23:19.391483 kubelet[2404]: E0413 19:23:19.391291 2404 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:23:19.396187 kubelet[2404]: E0413 19:23:19.396156 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.8.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:23:19.396703 kubelet[2404]: I0413 19:23:19.396601 2404 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:23:19.396703 kubelet[2404]: I0413 19:23:19.396631 2404 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:23:19.396703 kubelet[2404]: I0413 19:23:19.396652 2404 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:23:19.399069 kubelet[2404]: I0413 19:23:19.399017 2404 policy_none.go:49] "None policy: Start" Apr 13 19:23:19.399069 kubelet[2404]: I0413 19:23:19.399061 2404 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:23:19.399069 kubelet[2404]: I0413 19:23:19.399074 2404 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:23:19.403803 kubelet[2404]: E0413 19:23:19.403750 2404 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:23:19.404047 kubelet[2404]: I0413 19:23:19.404005 2404 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:23:19.404202 kubelet[2404]: I0413 19:23:19.404142 2404 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:23:19.405876 kubelet[2404]: I0413 
19:23:19.405829 2404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:23:19.409188 kubelet[2404]: E0413 19:23:19.409134 2404 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:23:19.409188 kubelet[2404]: E0413 19:23:19.409195 2404 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-f-96a1162b98\" not found" Apr 13 19:23:19.504943 kubelet[2404]: E0413 19:23:19.504860 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.506353 kubelet[2404]: I0413 19:23:19.506105 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.506671 kubelet[2404]: E0413 19:23:19.506616 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.8.180:6443/api/v1/nodes\": dial tcp 178.105.8.180:6443: connect: connection refused" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.509829 kubelet[2404]: E0413 19:23:19.509805 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.513629 kubelet[2404]: E0413 19:23:19.512702 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.551723 kubelet[2404]: I0413 19:23:19.551424 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.551723 kubelet[2404]: I0413 19:23:19.551477 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.551723 kubelet[2404]: I0413 19:23:19.551505 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.551723 kubelet[2404]: I0413 19:23:19.551525 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.551723 kubelet[2404]: I0413 19:23:19.551554 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.552062 kubelet[2404]: I0413 19:23:19.551575 2404 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fce2c2451e945777c3b07a57e2beb31-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-f-96a1162b98\" (UID: \"7fce2c2451e945777c3b07a57e2beb31\") " pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.552062 kubelet[2404]: I0413 19:23:19.551593 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.552062 kubelet[2404]: I0413 19:23:19.551613 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.552062 kubelet[2404]: I0413 19:23:19.551633 2404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.552747 kubelet[2404]: E0413 19:23:19.552708 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.8.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-f-96a1162b98?timeout=10s\": dial tcp 178.105.8.180:6443: connect: connection refused" interval="400ms" Apr 13 19:23:19.710548 kubelet[2404]: I0413 19:23:19.710058 2404 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.710741 kubelet[2404]: E0413 19:23:19.710604 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.8.180:6443/api/v1/nodes\": dial tcp 178.105.8.180:6443: connect: connection refused" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:19.807736 containerd[1605]: time="2026-04-13T19:23:19.807235037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-f-96a1162b98,Uid:edb786c7338f18bcd8f5554ef83c5ec4,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:19.811907 containerd[1605]: time="2026-04-13T19:23:19.811561687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-f-96a1162b98,Uid:89d3f498092d1c29f48c85e862629eb1,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:19.814969 containerd[1605]: time="2026-04-13T19:23:19.814918606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-f-96a1162b98,Uid:7fce2c2451e945777c3b07a57e2beb31,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:19.953850 kubelet[2404]: E0413 19:23:19.953761 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.8.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-f-96a1162b98?timeout=10s\": dial tcp 178.105.8.180:6443: connect: connection refused" interval="800ms" Apr 13 19:23:20.113971 kubelet[2404]: I0413 19:23:20.113682 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:20.114219 kubelet[2404]: E0413 19:23:20.114094 2404 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.8.180:6443/api/v1/nodes\": dial tcp 178.105.8.180:6443: connect: connection refused" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:20.273347 kubelet[2404]: E0413 19:23:20.272992 2404 reflector.go:200] "Failed to 
watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.8.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:23:20.274473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301879876.mount: Deactivated successfully. Apr 13 19:23:20.283320 containerd[1605]: time="2026-04-13T19:23:20.283241089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:20.285280 containerd[1605]: time="2026-04-13T19:23:20.285222431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:20.286609 containerd[1605]: time="2026-04-13T19:23:20.286566526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 13 19:23:20.288021 containerd[1605]: time="2026-04-13T19:23:20.287928822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:23:20.290069 containerd[1605]: time="2026-04-13T19:23:20.289280437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:20.293079 containerd[1605]: time="2026-04-13T19:23:20.292209629Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:20.295189 containerd[1605]: time="2026-04-13T19:23:20.295155782Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:23:20.300968 containerd[1605]: time="2026-04-13T19:23:20.300916846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:23:20.302000 containerd[1605]: time="2026-04-13T19:23:20.301960218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.317929ms" Apr 13 19:23:20.305037 containerd[1605]: time="2026-04-13T19:23:20.304956491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.604973ms" Apr 13 19:23:20.306091 containerd[1605]: time="2026-04-13T19:23:20.306011903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.001377ms" Apr 13 19:23:20.443608 containerd[1605]: time="2026-04-13T19:23:20.442856064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:20.443608 containerd[1605]: time="2026-04-13T19:23:20.442937465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:20.443608 containerd[1605]: time="2026-04-13T19:23:20.442963346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.443608 containerd[1605]: time="2026-04-13T19:23:20.443066547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.446606 containerd[1605]: time="2026-04-13T19:23:20.446377824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:20.446606 containerd[1605]: time="2026-04-13T19:23:20.446447264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:20.446606 containerd[1605]: time="2026-04-13T19:23:20.446461944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.446847 containerd[1605]: time="2026-04-13T19:23:20.446560026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.450697 containerd[1605]: time="2026-04-13T19:23:20.450410388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:20.450697 containerd[1605]: time="2026-04-13T19:23:20.450471189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:20.451477 containerd[1605]: time="2026-04-13T19:23:20.450684191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.451477 containerd[1605]: time="2026-04-13T19:23:20.451229597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:20.519122 containerd[1605]: time="2026-04-13T19:23:20.518986671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-f-96a1162b98,Uid:edb786c7338f18bcd8f5554ef83c5ec4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb66461be2078ba6c92644cfb83fe2fa723d5babff652adf69ff9b5e9e1184a2\"" Apr 13 19:23:20.526261 containerd[1605]: time="2026-04-13T19:23:20.526227671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-f-96a1162b98,Uid:89d3f498092d1c29f48c85e862629eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a1d2aca420be3f4a5af18172132ef5ec85c87cb5ff0a7b86d57499c7d9a515\"" Apr 13 19:23:20.527853 containerd[1605]: time="2026-04-13T19:23:20.527809889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-f-96a1162b98,Uid:7fce2c2451e945777c3b07a57e2beb31,Namespace:kube-system,Attempt:0,} returns sandbox id \"f976f1ad7a089b1c6663033261736269f4981ff6bba93a58cdef45410a96c5d2\"" Apr 13 19:23:20.532317 containerd[1605]: time="2026-04-13T19:23:20.532198578Z" level=info msg="CreateContainer within sandbox \"fb66461be2078ba6c92644cfb83fe2fa723d5babff652adf69ff9b5e9e1184a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:23:20.533864 containerd[1605]: time="2026-04-13T19:23:20.533759595Z" level=info msg="CreateContainer within sandbox \"a3a1d2aca420be3f4a5af18172132ef5ec85c87cb5ff0a7b86d57499c7d9a515\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:23:20.535494 containerd[1605]: time="2026-04-13T19:23:20.535464454Z" level=info msg="CreateContainer within sandbox \"f976f1ad7a089b1c6663033261736269f4981ff6bba93a58cdef45410a96c5d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:23:20.548638 kubelet[2404]: E0413 19:23:20.548601 2404 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.8.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-f-96a1162b98&limit=500&resourceVersion=0\": dial tcp 178.105.8.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:23:20.551306 containerd[1605]: time="2026-04-13T19:23:20.551262550Z" level=info msg="CreateContainer within sandbox \"fb66461be2078ba6c92644cfb83fe2fa723d5babff652adf69ff9b5e9e1184a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7b601b44502470a0b393047c3713173ccd2e89196a417d0ed20e43b58684195\"" Apr 13 19:23:20.552772 containerd[1605]: time="2026-04-13T19:23:20.552721006Z" level=info msg="StartContainer for \"f7b601b44502470a0b393047c3713173ccd2e89196a417d0ed20e43b58684195\"" Apr 13 19:23:20.559684 containerd[1605]: time="2026-04-13T19:23:20.559552522Z" level=info msg="CreateContainer within sandbox \"a3a1d2aca420be3f4a5af18172132ef5ec85c87cb5ff0a7b86d57499c7d9a515\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93783c7fac353477785117d6a3bdca543375b32a5e2c2d5ad1222577361e028f\"" Apr 13 19:23:20.560655 containerd[1605]: time="2026-04-13T19:23:20.560579013Z" level=info msg="StartContainer for \"93783c7fac353477785117d6a3bdca543375b32a5e2c2d5ad1222577361e028f\"" Apr 13 19:23:20.563521 containerd[1605]: time="2026-04-13T19:23:20.563436965Z" level=info msg="CreateContainer within sandbox \"f976f1ad7a089b1c6663033261736269f4981ff6bba93a58cdef45410a96c5d2\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45be3e208bd915a3a14742a5a969ab4172b5361746857fcc3e0c8955e0883ace\"" Apr 13 19:23:20.564122 containerd[1605]: time="2026-04-13T19:23:20.563955611Z" level=info msg="StartContainer for \"45be3e208bd915a3a14742a5a969ab4172b5361746857fcc3e0c8955e0883ace\"" Apr 13 19:23:20.649197 containerd[1605]: time="2026-04-13T19:23:20.648739914Z" level=info msg="StartContainer for \"93783c7fac353477785117d6a3bdca543375b32a5e2c2d5ad1222577361e028f\" returns successfully" Apr 13 19:23:20.657693 containerd[1605]: time="2026-04-13T19:23:20.657477211Z" level=info msg="StartContainer for \"f7b601b44502470a0b393047c3713173ccd2e89196a417d0ed20e43b58684195\" returns successfully" Apr 13 19:23:20.682634 containerd[1605]: time="2026-04-13T19:23:20.682259527Z" level=info msg="StartContainer for \"45be3e208bd915a3a14742a5a969ab4172b5361746857fcc3e0c8955e0883ace\" returns successfully" Apr 13 19:23:20.754703 kubelet[2404]: E0413 19:23:20.754649 2404 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.8.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-f-96a1162b98?timeout=10s\": dial tcp 178.105.8.180:6443: connect: connection refused" interval="1.6s" Apr 13 19:23:20.918145 kubelet[2404]: I0413 19:23:20.915816 2404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:21.408060 kubelet[2404]: E0413 19:23:21.407573 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:21.408060 kubelet[2404]: E0413 19:23:21.407969 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:21.412054 kubelet[2404]: E0413 19:23:21.412013 2404 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:22.416500 kubelet[2404]: E0413 19:23:22.414786 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:22.417991 kubelet[2404]: E0413 19:23:22.417825 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:23.835303 kubelet[2404]: E0413 19:23:23.835269 2404 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:23.982308 kubelet[2404]: E0413 19:23:23.982271 2404 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-f-96a1162b98\" not found" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.071187 kubelet[2404]: E0413 19:23:24.070956 2404 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-7-f-96a1162b98.18a600ff402c1666 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-f-96a1162b98,UID:ci-4081-3-7-f-96a1162b98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-f-96a1162b98,},FirstTimestamp:2026-04-13 19:23:19.337530982 +0000 UTC m=+1.493196554,LastTimestamp:2026-04-13 19:23:19.337530982 +0000 UTC m=+1.493196554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-f-96a1162b98,}" Apr 13 19:23:24.114585 kubelet[2404]: I0413 19:23:24.114254 2404 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.153082 kubelet[2404]: I0413 19:23:24.152533 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.180393 kubelet[2404]: E0413 19:23:24.180239 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-f-96a1162b98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.180393 kubelet[2404]: I0413 19:23:24.180277 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.186662 kubelet[2404]: E0413 19:23:24.186619 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.186662 kubelet[2404]: I0413 19:23:24.186659 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.192963 kubelet[2404]: E0413 19:23:24.192924 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.333378 kubelet[2404]: I0413 19:23:24.333317 2404 apiserver.go:52] "Watching apiserver" Apr 13 19:23:24.350736 kubelet[2404]: I0413 19:23:24.350689 2404 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 
19:23:24.403477 kubelet[2404]: I0413 19:23:24.403253 2404 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:24.407383 kubelet[2404]: E0413 19:23:24.407338 2404 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:26.314725 systemd[1]: Reloading requested from client PID 2683 ('systemctl') (unit session-7.scope)... Apr 13 19:23:26.315071 systemd[1]: Reloading... Apr 13 19:23:26.396057 zram_generator::config[2723]: No configuration found. Apr 13 19:23:26.507298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:26.586539 systemd[1]: Reloading finished in 271 ms. Apr 13 19:23:26.615875 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:26.632444 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:23:26.632958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:26.640523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:26.783302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:26.797617 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:23:26.865628 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:23:26.865628 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:23:26.865628 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:23:26.865628 kubelet[2778]: I0413 19:23:26.865326 2778 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:23:26.881067 kubelet[2778]: I0413 19:23:26.879423 2778 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:23:26.881067 kubelet[2778]: I0413 19:23:26.879459 2778 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:23:26.881067 kubelet[2778]: I0413 19:23:26.879705 2778 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:23:26.881433 kubelet[2778]: I0413 19:23:26.881410 2778 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:23:26.884345 kubelet[2778]: I0413 19:23:26.884287 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:23:26.889813 kubelet[2778]: E0413 19:23:26.889669 2778 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:23:26.890008 kubelet[2778]: I0413 19:23:26.889986 2778 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 13 19:23:26.893455 kubelet[2778]: I0413 19:23:26.893418 2778 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 19:23:26.894287 kubelet[2778]: I0413 19:23:26.894246 2778 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:23:26.894615 kubelet[2778]: I0413 19:23:26.894387 2778 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-f-96a1162b98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
 Apr 13 19:23:26.894771 kubelet[2778]: I0413 19:23:26.894755 2778 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:23:26.894837 kubelet[2778]: I0413 19:23:26.894828 2778 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:23:26.894982 kubelet[2778]: I0413 19:23:26.894969 2778 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:23:26.895271 kubelet[2778]: I0413 19:23:26.895255 2778 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:23:26.895373 kubelet[2778]: I0413 19:23:26.895361 2778 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:23:26.895458 kubelet[2778]: I0413 19:23:26.895448 2778 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:23:26.895522 kubelet[2778]: I0413 19:23:26.895514 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:23:26.898042 kubelet[2778]: I0413 19:23:26.897760 2778 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:23:26.898817 kubelet[2778]: I0413 19:23:26.898495 2778 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:23:26.902695 kubelet[2778]: I0413 19:23:26.901068 2778 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:23:26.902695 kubelet[2778]: I0413 19:23:26.901150 2778 server.go:1289] "Started kubelet" Apr 13 19:23:26.905501 kubelet[2778]: I0413 19:23:26.904013 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:23:26.920294 kubelet[2778]: I0413 19:23:26.920235 2778 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:23:26.922331 kubelet[2778]: I0413 19:23:26.922296 2778 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:23:26.929341 kubelet[2778]: I0413 19:23:26.928771 2778 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:23:26.929341 kubelet[2778]: I0413 19:23:26.929190 2778 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:23:26.937999 kubelet[2778]: I0413 19:23:26.932598 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:23:26.939374 kubelet[2778]: I0413 19:23:26.939346 2778 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:23:26.939723 kubelet[2778]: E0413 19:23:26.939621 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-f-96a1162b98\" not found" Apr 13 19:23:26.942554 kubelet[2778]: I0413 19:23:26.942214 2778 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:23:26.942554 kubelet[2778]: I0413 19:23:26.942364 2778 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:23:26.945441 kubelet[2778]: I0413 19:23:26.944453 2778 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:23:26.945933 kubelet[2778]: I0413 19:23:26.945832 2778 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:23:26.945933 kubelet[2778]: I0413 19:23:26.945927 2778 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:23:26.946000 kubelet[2778]: I0413 19:23:26.945958 2778 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:23:26.946000 kubelet[2778]: I0413 19:23:26.945966 2778 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:23:26.946090 kubelet[2778]: E0413 19:23:26.946014 2778 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:23:26.962731 kubelet[2778]: I0413 19:23:26.962693 2778 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:23:26.963442 kubelet[2778]: I0413 19:23:26.963415 2778 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:23:26.966591 kubelet[2778]: E0413 19:23:26.966546 2778 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:23:26.966956 kubelet[2778]: I0413 19:23:26.966906 2778 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:23:27.023155 kubelet[2778]: I0413 19:23:27.023127 2778 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:23:27.023406 kubelet[2778]: I0413 19:23:27.023389 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:23:27.023536 kubelet[2778]: I0413 19:23:27.023528 2778 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:23:27.023764 kubelet[2778]: I0413 19:23:27.023748 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:23:27.025053 kubelet[2778]: I0413 19:23:27.023836 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:23:27.025053 kubelet[2778]: I0413 19:23:27.023913 2778 policy_none.go:49] "None policy: Start" Apr 13 19:23:27.025053 kubelet[2778]: I0413 19:23:27.023927 2778 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:23:27.025053 kubelet[2778]: I0413 19:23:27.023944 
2778 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:23:27.025053 kubelet[2778]: I0413 19:23:27.024076 2778 state_mem.go:75] "Updated machine memory state" Apr 13 19:23:27.025616 kubelet[2778]: E0413 19:23:27.025589 2778 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:23:27.027241 kubelet[2778]: I0413 19:23:27.027226 2778 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:23:27.027365 kubelet[2778]: I0413 19:23:27.027331 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:23:27.027717 kubelet[2778]: I0413 19:23:27.027699 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:23:27.030009 kubelet[2778]: E0413 19:23:27.029798 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:23:27.046809 kubelet[2778]: I0413 19:23:27.046758 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.048011 kubelet[2778]: I0413 19:23:27.047969 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.048536 kubelet[2778]: I0413 19:23:27.048519 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.131897 kubelet[2778]: I0413 19:23:27.131751 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143567 kubelet[2778]: I0413 19:23:27.143319 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143567 kubelet[2778]: I0413 19:23:27.143373 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143567 kubelet[2778]: I0413 19:23:27.143399 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143567 kubelet[2778]: I0413 19:23:27.143424 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143567 kubelet[2778]: I0413 19:23:27.143462 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edb786c7338f18bcd8f5554ef83c5ec4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" (UID: \"edb786c7338f18bcd8f5554ef83c5ec4\") " pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143830 kubelet[2778]: I0413 19:23:27.143483 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143830 kubelet[2778]: I0413 19:23:27.143550 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143830 kubelet[2778]: I0413 19:23:27.143606 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89d3f498092d1c29f48c85e862629eb1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-f-96a1162b98\" (UID: \"89d3f498092d1c29f48c85e862629eb1\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.143830 kubelet[2778]: I0413 19:23:27.143637 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fce2c2451e945777c3b07a57e2beb31-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-f-96a1162b98\" (UID: \"7fce2c2451e945777c3b07a57e2beb31\") " pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.146677 kubelet[2778]: I0413 19:23:27.146142 2778 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.146677 kubelet[2778]: I0413 19:23:27.146243 2778 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-f-96a1162b98" Apr 13 19:23:27.310257 sudo[2813]: root : PWD=/home/core ; 
USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 19:23:27.310550 sudo[2813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 19:23:27.811048 sudo[2813]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:27.897277 kubelet[2778]: I0413 19:23:27.897003 2778 apiserver.go:52] "Watching apiserver" Apr 13 19:23:27.942775 kubelet[2778]: I0413 19:23:27.942721 2778 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:23:27.997630 kubelet[2778]: I0413 19:23:27.997584 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:28.009334 kubelet[2778]: E0413 19:23:28.009297 2778 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-f-96a1162b98\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" Apr 13 19:23:28.049393 kubelet[2778]: I0413 19:23:28.049299 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-f-96a1162b98" podStartSLOduration=1.049277594 podStartE2EDuration="1.049277594s" podCreationTimestamp="2026-04-13 19:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:28.046392209 +0000 UTC m=+1.239964263" watchObservedRunningTime="2026-04-13 19:23:28.049277594 +0000 UTC m=+1.242849648" Apr 13 19:23:28.049544 kubelet[2778]: I0413 19:23:28.049457 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-f-96a1162b98" podStartSLOduration=1.049450116 podStartE2EDuration="1.049450116s" podCreationTimestamp="2026-04-13 19:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 
19:23:28.029463063 +0000 UTC m=+1.223035157" watchObservedRunningTime="2026-04-13 19:23:28.049450116 +0000 UTC m=+1.243022170" Apr 13 19:23:28.085125 kubelet[2778]: I0413 19:23:28.084710 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-f-96a1162b98" podStartSLOduration=1.084692581 podStartE2EDuration="1.084692581s" podCreationTimestamp="2026-04-13 19:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:28.064375045 +0000 UTC m=+1.257947139" watchObservedRunningTime="2026-04-13 19:23:28.084692581 +0000 UTC m=+1.278264635" Apr 13 19:23:29.795706 sudo[1870]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:29.812194 sshd[1866]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:29.817920 systemd[1]: sshd@6-178.105.8.180:22-50.85.169.122:60560.service: Deactivated successfully. Apr 13 19:23:29.821386 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:23:29.822420 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:23:29.823595 systemd-logind[1571]: Removed session 7. Apr 13 19:23:32.284896 kubelet[2778]: I0413 19:23:32.284855 2778 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:23:32.286085 containerd[1605]: time="2026-04-13T19:23:32.286016270Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 13 19:23:32.286854 kubelet[2778]: I0413 19:23:32.286447 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:23:32.985316 kubelet[2778]: I0413 19:23:32.985238 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-xtables-lock\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985316 kubelet[2778]: I0413 19:23:32.985321 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-lib-modules\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985644 kubelet[2778]: I0413 19:23:32.985400 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-config-path\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985644 kubelet[2778]: I0413 19:23:32.985440 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-net\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985644 kubelet[2778]: I0413 19:23:32.985544 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-run\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " 
pod="kube-system/cilium-g229w" Apr 13 19:23:32.985644 kubelet[2778]: I0413 19:23:32.985580 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-etc-cni-netd\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985644 kubelet[2778]: I0413 19:23:32.985635 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dc0b252-8426-470d-b95c-77b0da19e18d-clustermesh-secrets\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985989 kubelet[2778]: I0413 19:23:32.985690 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-kernel\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985989 kubelet[2778]: I0413 19:23:32.985759 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-hubble-tls\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985989 kubelet[2778]: I0413 19:23:32.985792 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqkd\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.985989 kubelet[2778]: I0413 19:23:32.985932 2778 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-929r6\" (UniqueName: \"kubernetes.io/projected/9f65ef23-145a-4dd7-871d-f7035d3bbca3-kube-api-access-929r6\") pod \"kube-proxy-f5vn6\" (UID: \"9f65ef23-145a-4dd7-871d-f7035d3bbca3\") " pod="kube-system/kube-proxy-f5vn6" Apr 13 19:23:32.986198 kubelet[2778]: I0413 19:23:32.986009 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f65ef23-145a-4dd7-871d-f7035d3bbca3-kube-proxy\") pod \"kube-proxy-f5vn6\" (UID: \"9f65ef23-145a-4dd7-871d-f7035d3bbca3\") " pod="kube-system/kube-proxy-f5vn6" Apr 13 19:23:32.986198 kubelet[2778]: I0413 19:23:32.986073 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f65ef23-145a-4dd7-871d-f7035d3bbca3-xtables-lock\") pod \"kube-proxy-f5vn6\" (UID: \"9f65ef23-145a-4dd7-871d-f7035d3bbca3\") " pod="kube-system/kube-proxy-f5vn6" Apr 13 19:23:32.986198 kubelet[2778]: I0413 19:23:32.986108 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f65ef23-145a-4dd7-871d-f7035d3bbca3-lib-modules\") pod \"kube-proxy-f5vn6\" (UID: \"9f65ef23-145a-4dd7-871d-f7035d3bbca3\") " pod="kube-system/kube-proxy-f5vn6" Apr 13 19:23:32.986198 kubelet[2778]: I0413 19:23:32.986160 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-bpf-maps\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.986198 kubelet[2778]: I0413 19:23:32.986192 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-hostproc\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.986464 kubelet[2778]: I0413 19:23:32.986228 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-cgroup\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:32.986464 kubelet[2778]: I0413 19:23:32.986264 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cni-path\") pod \"cilium-g229w\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " pod="kube-system/cilium-g229w" Apr 13 19:23:33.115055 kubelet[2778]: E0413 19:23:33.113800 2778 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 19:23:33.115055 kubelet[2778]: E0413 19:23:33.113853 2778 projected.go:194] Error preparing data for projected volume kube-api-access-vwqkd for pod kube-system/cilium-g229w: configmap "kube-root-ca.crt" not found Apr 13 19:23:33.115055 kubelet[2778]: E0413 19:23:33.113928 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd podName:0dc0b252-8426-470d-b95c-77b0da19e18d nodeName:}" failed. No retries permitted until 2026-04-13 19:23:33.613902238 +0000 UTC m=+6.807474292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vwqkd" (UniqueName: "kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd") pod "cilium-g229w" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d") : configmap "kube-root-ca.crt" not found Apr 13 19:23:33.116019 kubelet[2778]: E0413 19:23:33.115989 2778 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 19:23:33.116649 kubelet[2778]: E0413 19:23:33.116287 2778 projected.go:194] Error preparing data for projected volume kube-api-access-929r6 for pod kube-system/kube-proxy-f5vn6: configmap "kube-root-ca.crt" not found Apr 13 19:23:33.116649 kubelet[2778]: E0413 19:23:33.116377 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f65ef23-145a-4dd7-871d-f7035d3bbca3-kube-api-access-929r6 podName:9f65ef23-145a-4dd7-871d-f7035d3bbca3 nodeName:}" failed. No retries permitted until 2026-04-13 19:23:33.616352937 +0000 UTC m=+6.809924991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-929r6" (UniqueName: "kubernetes.io/projected/9f65ef23-145a-4dd7-871d-f7035d3bbca3-kube-api-access-929r6") pod "kube-proxy-f5vn6" (UID: "9f65ef23-145a-4dd7-871d-f7035d3bbca3") : configmap "kube-root-ca.crt" not found Apr 13 19:23:33.591121 kubelet[2778]: I0413 19:23:33.590966 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84kv\" (UniqueName: \"kubernetes.io/projected/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-kube-api-access-q84kv\") pod \"cilium-operator-6c4d7847fc-pxw6z\" (UID: \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\") " pod="kube-system/cilium-operator-6c4d7847fc-pxw6z" Apr 13 19:23:33.591121 kubelet[2778]: I0413 19:23:33.591075 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pxw6z\" (UID: \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\") " pod="kube-system/cilium-operator-6c4d7847fc-pxw6z" Apr 13 19:23:33.801041 containerd[1605]: time="2026-04-13T19:23:33.800580921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f5vn6,Uid:9f65ef23-145a-4dd7-871d-f7035d3bbca3,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:33.808094 containerd[1605]: time="2026-04-13T19:23:33.807536534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pxw6z,Uid:c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:33.833512 containerd[1605]: time="2026-04-13T19:23:33.833399372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:33.833706 containerd[1605]: time="2026-04-13T19:23:33.833482573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:33.833706 containerd[1605]: time="2026-04-13T19:23:33.833498213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.833706 containerd[1605]: time="2026-04-13T19:23:33.833602094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.835008 containerd[1605]: time="2026-04-13T19:23:33.834618021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g229w,Uid:0dc0b252-8426-470d-b95c-77b0da19e18d,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:33.844899 containerd[1605]: time="2026-04-13T19:23:33.844508657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:33.844899 containerd[1605]: time="2026-04-13T19:23:33.844576457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:33.844899 containerd[1605]: time="2026-04-13T19:23:33.844587297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.844899 containerd[1605]: time="2026-04-13T19:23:33.844666898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.891216 containerd[1605]: time="2026-04-13T19:23:33.890661929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:33.891216 containerd[1605]: time="2026-04-13T19:23:33.890743490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:33.891216 containerd[1605]: time="2026-04-13T19:23:33.890755530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.891216 containerd[1605]: time="2026-04-13T19:23:33.890874731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:33.893555 containerd[1605]: time="2026-04-13T19:23:33.893516351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f5vn6,Uid:9f65ef23-145a-4dd7-871d-f7035d3bbca3,Namespace:kube-system,Attempt:0,} returns sandbox id \"980c8f672d39bb283f2eddaab6a6f5bdaac31841e8814a054ce9d13788c66541\"" Apr 13 19:23:33.903448 containerd[1605]: time="2026-04-13T19:23:33.903396986Z" level=info msg="CreateContainer within sandbox \"980c8f672d39bb283f2eddaab6a6f5bdaac31841e8814a054ce9d13788c66541\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:23:33.914255 containerd[1605]: time="2026-04-13T19:23:33.913839346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pxw6z,Uid:c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\"" Apr 13 19:23:33.917419 containerd[1605]: time="2026-04-13T19:23:33.917367893Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 19:23:33.929846 containerd[1605]: time="2026-04-13T19:23:33.929448345Z" level=info msg="CreateContainer within sandbox \"980c8f672d39bb283f2eddaab6a6f5bdaac31841e8814a054ce9d13788c66541\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7363394c63276d660196000515730c7273ccc98645d95e7aff20890a0a06e3f2\"" Apr 13 19:23:33.932374 containerd[1605]: 
time="2026-04-13T19:23:33.932334407Z" level=info msg="StartContainer for \"7363394c63276d660196000515730c7273ccc98645d95e7aff20890a0a06e3f2\"" Apr 13 19:23:33.948211 containerd[1605]: time="2026-04-13T19:23:33.948173848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g229w,Uid:0dc0b252-8426-470d-b95c-77b0da19e18d,Namespace:kube-system,Attempt:0,} returns sandbox id \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\"" Apr 13 19:23:33.998369 containerd[1605]: time="2026-04-13T19:23:33.998249031Z" level=info msg="StartContainer for \"7363394c63276d660196000515730c7273ccc98645d95e7aff20890a0a06e3f2\" returns successfully" Apr 13 19:23:34.029379 kubelet[2778]: I0413 19:23:34.029290 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f5vn6" podStartSLOduration=2.029268703 podStartE2EDuration="2.029268703s" podCreationTimestamp="2026-04-13 19:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:34.026549763 +0000 UTC m=+7.220121817" watchObservedRunningTime="2026-04-13 19:23:34.029268703 +0000 UTC m=+7.222840757" Apr 13 19:23:35.729948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079348815.mount: Deactivated successfully. 
Apr 13 19:23:36.597256 containerd[1605]: time="2026-04-13T19:23:36.597196951Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:36.598861 containerd[1605]: time="2026-04-13T19:23:36.598682642Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 13 19:23:36.601611 containerd[1605]: time="2026-04-13T19:23:36.600058412Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:36.601611 containerd[1605]: time="2026-04-13T19:23:36.601481542Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.683890127s" Apr 13 19:23:36.601611 containerd[1605]: time="2026-04-13T19:23:36.601516342Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 13 19:23:36.605293 containerd[1605]: time="2026-04-13T19:23:36.604069241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:23:36.608189 containerd[1605]: time="2026-04-13T19:23:36.608156070Z" level=info msg="CreateContainer within sandbox 
\"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 19:23:36.627016 containerd[1605]: time="2026-04-13T19:23:36.626928365Z" level=info msg="CreateContainer within sandbox \"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\"" Apr 13 19:23:36.629067 containerd[1605]: time="2026-04-13T19:23:36.628961699Z" level=info msg="StartContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\"" Apr 13 19:23:36.685124 containerd[1605]: time="2026-04-13T19:23:36.685060061Z" level=info msg="StartContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" returns successfully" Apr 13 19:23:37.070665 kubelet[2778]: I0413 19:23:37.070585 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pxw6z" podStartSLOduration=1.383892468 podStartE2EDuration="4.070567696s" podCreationTimestamp="2026-04-13 19:23:33 +0000 UTC" firstStartedPulling="2026-04-13 19:23:33.916596807 +0000 UTC m=+7.110168861" lastFinishedPulling="2026-04-13 19:23:36.603272075 +0000 UTC m=+9.796844089" observedRunningTime="2026-04-13 19:23:37.069212006 +0000 UTC m=+10.262784060" watchObservedRunningTime="2026-04-13 19:23:37.070567696 +0000 UTC m=+10.264139750" Apr 13 19:23:42.773164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066736495.mount: Deactivated successfully. 
Apr 13 19:23:44.185074 containerd[1605]: time="2026-04-13T19:23:44.183668679Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:44.185074 containerd[1605]: time="2026-04-13T19:23:44.184415443Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 13 19:23:44.186232 containerd[1605]: time="2026-04-13T19:23:44.186186615Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:44.188202 containerd[1605]: time="2026-04-13T19:23:44.188161547Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.584042866s" Apr 13 19:23:44.188335 containerd[1605]: time="2026-04-13T19:23:44.188315868Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 13 19:23:44.195641 containerd[1605]: time="2026-04-13T19:23:44.195592994Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:23:44.209298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897010614.mount: Deactivated successfully. 
Apr 13 19:23:44.223685 containerd[1605]: time="2026-04-13T19:23:44.223617690Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\"" Apr 13 19:23:44.225516 containerd[1605]: time="2026-04-13T19:23:44.225360941Z" level=info msg="StartContainer for \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\"" Apr 13 19:23:44.284213 containerd[1605]: time="2026-04-13T19:23:44.283827429Z" level=info msg="StartContainer for \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\" returns successfully" Apr 13 19:23:44.414272 containerd[1605]: time="2026-04-13T19:23:44.414019407Z" level=info msg="shim disconnected" id=53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a namespace=k8s.io Apr 13 19:23:44.414272 containerd[1605]: time="2026-04-13T19:23:44.414094008Z" level=warning msg="cleaning up after shim disconnected" id=53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a namespace=k8s.io Apr 13 19:23:44.414272 containerd[1605]: time="2026-04-13T19:23:44.414104368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:45.065619 containerd[1605]: time="2026-04-13T19:23:45.065541579Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:23:45.078826 containerd[1605]: time="2026-04-13T19:23:45.078644100Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\"" Apr 13 19:23:45.080178 containerd[1605]: time="2026-04-13T19:23:45.079677907Z" level=info 
msg="StartContainer for \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\"" Apr 13 19:23:45.139762 containerd[1605]: time="2026-04-13T19:23:45.139677679Z" level=info msg="StartContainer for \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\" returns successfully" Apr 13 19:23:45.151770 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:23:45.152733 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:45.153069 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:45.165781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:45.183080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:45.194101 containerd[1605]: time="2026-04-13T19:23:45.193863775Z" level=info msg="shim disconnected" id=4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8 namespace=k8s.io Apr 13 19:23:45.194101 containerd[1605]: time="2026-04-13T19:23:45.193919616Z" level=warning msg="cleaning up after shim disconnected" id=4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8 namespace=k8s.io Apr 13 19:23:45.194101 containerd[1605]: time="2026-04-13T19:23:45.193927456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:45.203840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a-rootfs.mount: Deactivated successfully. 
Apr 13 19:23:46.082816 containerd[1605]: time="2026-04-13T19:23:46.081920321Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:23:46.130920 containerd[1605]: time="2026-04-13T19:23:46.130844661Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\"" Apr 13 19:23:46.132668 containerd[1605]: time="2026-04-13T19:23:46.132622432Z" level=info msg="StartContainer for \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\"" Apr 13 19:23:46.196437 containerd[1605]: time="2026-04-13T19:23:46.196379663Z" level=info msg="StartContainer for \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\" returns successfully" Apr 13 19:23:46.225172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60-rootfs.mount: Deactivated successfully. 
Apr 13 19:23:46.227777 containerd[1605]: time="2026-04-13T19:23:46.227558014Z" level=info msg="shim disconnected" id=5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60 namespace=k8s.io Apr 13 19:23:46.227777 containerd[1605]: time="2026-04-13T19:23:46.227643454Z" level=warning msg="cleaning up after shim disconnected" id=5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60 namespace=k8s.io Apr 13 19:23:46.227777 containerd[1605]: time="2026-04-13T19:23:46.227672654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:47.080823 containerd[1605]: time="2026-04-13T19:23:47.078498145Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:23:47.114211 containerd[1605]: time="2026-04-13T19:23:47.114154361Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\"" Apr 13 19:23:47.115108 containerd[1605]: time="2026-04-13T19:23:47.115064006Z" level=info msg="StartContainer for \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\"" Apr 13 19:23:47.243838 containerd[1605]: time="2026-04-13T19:23:47.243787826Z" level=info msg="StartContainer for \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\" returns successfully" Apr 13 19:23:47.266698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686-rootfs.mount: Deactivated successfully. 
Apr 13 19:23:47.271504 containerd[1605]: time="2026-04-13T19:23:47.271206752Z" level=info msg="shim disconnected" id=10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686 namespace=k8s.io Apr 13 19:23:47.271504 containerd[1605]: time="2026-04-13T19:23:47.271273113Z" level=warning msg="cleaning up after shim disconnected" id=10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686 namespace=k8s.io Apr 13 19:23:47.271504 containerd[1605]: time="2026-04-13T19:23:47.271280633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:23:48.093175 containerd[1605]: time="2026-04-13T19:23:48.093048085Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:23:48.118179 containerd[1605]: time="2026-04-13T19:23:48.118131556Z" level=info msg="CreateContainer within sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\"" Apr 13 19:23:48.119959 containerd[1605]: time="2026-04-13T19:23:48.118872120Z" level=info msg="StartContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\"" Apr 13 19:23:48.178280 containerd[1605]: time="2026-04-13T19:23:48.178124635Z" level=info msg="StartContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" returns successfully" Apr 13 19:23:48.323630 kubelet[2778]: I0413 19:23:48.323563 2778 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 19:23:48.512995 kubelet[2778]: I0413 19:23:48.512956 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85vb\" (UniqueName: \"kubernetes.io/projected/01fe312a-afd7-4072-8368-e282e2eccb80-kube-api-access-d85vb\") pod 
\"coredns-674b8bbfcf-26h4c\" (UID: \"01fe312a-afd7-4072-8368-e282e2eccb80\") " pod="kube-system/coredns-674b8bbfcf-26h4c" Apr 13 19:23:48.513154 kubelet[2778]: I0413 19:23:48.513011 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01fe312a-afd7-4072-8368-e282e2eccb80-config-volume\") pod \"coredns-674b8bbfcf-26h4c\" (UID: \"01fe312a-afd7-4072-8368-e282e2eccb80\") " pod="kube-system/coredns-674b8bbfcf-26h4c" Apr 13 19:23:48.513154 kubelet[2778]: I0413 19:23:48.513038 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/937c3919-3971-4ce0-b203-59e1ffd6ec06-config-volume\") pod \"coredns-674b8bbfcf-5gtp7\" (UID: \"937c3919-3971-4ce0-b203-59e1ffd6ec06\") " pod="kube-system/coredns-674b8bbfcf-5gtp7" Apr 13 19:23:48.513154 kubelet[2778]: I0413 19:23:48.513057 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwst\" (UniqueName: \"kubernetes.io/projected/937c3919-3971-4ce0-b203-59e1ffd6ec06-kube-api-access-fmwst\") pod \"coredns-674b8bbfcf-5gtp7\" (UID: \"937c3919-3971-4ce0-b203-59e1ffd6ec06\") " pod="kube-system/coredns-674b8bbfcf-5gtp7" Apr 13 19:23:48.685693 containerd[1605]: time="2026-04-13T19:23:48.685273074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26h4c,Uid:01fe312a-afd7-4072-8368-e282e2eccb80,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:48.687289 containerd[1605]: time="2026-04-13T19:23:48.687073964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5gtp7,Uid:937c3919-3971-4ce0-b203-59e1ffd6ec06,Namespace:kube-system,Attempt:0,}" Apr 13 19:23:49.110043 kubelet[2778]: I0413 19:23:49.109387 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g229w" 
podStartSLOduration=6.869416793 podStartE2EDuration="17.109370248s" podCreationTimestamp="2026-04-13 19:23:32 +0000 UTC" firstStartedPulling="2026-04-13 19:23:33.950292065 +0000 UTC m=+7.143864119" lastFinishedPulling="2026-04-13 19:23:44.19024552 +0000 UTC m=+17.383817574" observedRunningTime="2026-04-13 19:23:49.108869245 +0000 UTC m=+22.302441299" watchObservedRunningTime="2026-04-13 19:23:49.109370248 +0000 UTC m=+22.302942302" Apr 13 19:23:49.962470 systemd-networkd[1249]: cilium_host: Link UP Apr 13 19:23:49.964383 systemd-networkd[1249]: cilium_net: Link UP Apr 13 19:23:49.964387 systemd-networkd[1249]: cilium_net: Gained carrier Apr 13 19:23:49.964626 systemd-networkd[1249]: cilium_host: Gained carrier Apr 13 19:23:50.080941 systemd-networkd[1249]: cilium_vxlan: Link UP Apr 13 19:23:50.081226 systemd-networkd[1249]: cilium_vxlan: Gained carrier Apr 13 19:23:50.364518 kernel: NET: Registered PF_ALG protocol family Apr 13 19:23:50.416491 systemd-networkd[1249]: cilium_net: Gained IPv6LL Apr 13 19:23:50.569207 systemd-networkd[1249]: cilium_host: Gained IPv6LL Apr 13 19:23:51.107976 systemd-networkd[1249]: lxc_health: Link UP Apr 13 19:23:51.112417 systemd-networkd[1249]: lxc_health: Gained carrier Apr 13 19:23:51.268307 systemd-networkd[1249]: lxcfb0ac2e811f3: Link UP Apr 13 19:23:51.276130 kernel: eth0: renamed from tmp05b95 Apr 13 19:23:51.278644 systemd-networkd[1249]: lxcfb0ac2e811f3: Gained carrier Apr 13 19:23:51.300939 systemd-networkd[1249]: lxc4a119d892012: Link UP Apr 13 19:23:51.304156 kernel: eth0: renamed from tmpfaa22 Apr 13 19:23:51.313795 systemd-networkd[1249]: lxc4a119d892012: Gained carrier Apr 13 19:23:51.849101 systemd-networkd[1249]: cilium_vxlan: Gained IPv6LL Apr 13 19:23:52.168327 systemd-networkd[1249]: lxc_health: Gained IPv6LL Apr 13 19:23:53.003212 systemd-networkd[1249]: lxcfb0ac2e811f3: Gained IPv6LL Apr 13 19:23:53.192440 systemd-networkd[1249]: lxc4a119d892012: Gained IPv6LL Apr 13 19:23:55.428054 containerd[1605]: 
time="2026-04-13T19:23:55.425119633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:55.428054 containerd[1605]: time="2026-04-13T19:23:55.425215561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:55.428054 containerd[1605]: time="2026-04-13T19:23:55.425229842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:55.428054 containerd[1605]: time="2026-04-13T19:23:55.425359013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:55.464934 systemd[1]: run-containerd-runc-k8s.io-05b956f2993aefc9354d8d50cda8b09ed62dfd5ae13703849099fb774c99114a-runc.xxADNQ.mount: Deactivated successfully. Apr 13 19:23:55.503609 containerd[1605]: time="2026-04-13T19:23:55.503452033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:23:55.503609 containerd[1605]: time="2026-04-13T19:23:55.503527079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:23:55.506045 containerd[1605]: time="2026-04-13T19:23:55.503585644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:55.506045 containerd[1605]: time="2026-04-13T19:23:55.503693214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:23:55.515333 containerd[1605]: time="2026-04-13T19:23:55.515294461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5gtp7,Uid:937c3919-3971-4ce0-b203-59e1ffd6ec06,Namespace:kube-system,Attempt:0,} returns sandbox id \"05b956f2993aefc9354d8d50cda8b09ed62dfd5ae13703849099fb774c99114a\"" Apr 13 19:23:55.526603 containerd[1605]: time="2026-04-13T19:23:55.525964707Z" level=info msg="CreateContainer within sandbox \"05b956f2993aefc9354d8d50cda8b09ed62dfd5ae13703849099fb774c99114a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:23:55.552686 containerd[1605]: time="2026-04-13T19:23:55.552626061Z" level=info msg="CreateContainer within sandbox \"05b956f2993aefc9354d8d50cda8b09ed62dfd5ae13703849099fb774c99114a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b8b4b5ecd9eff711403708ffabe0e207aa9a01a47907fdd9ae9dcb714bb51dd\"" Apr 13 19:23:55.556015 containerd[1605]: time="2026-04-13T19:23:55.555970432Z" level=info msg="StartContainer for \"4b8b4b5ecd9eff711403708ffabe0e207aa9a01a47907fdd9ae9dcb714bb51dd\"" Apr 13 19:23:55.586744 containerd[1605]: time="2026-04-13T19:23:55.586691259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26h4c,Uid:01fe312a-afd7-4072-8368-e282e2eccb80,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa226568857ab4b602886f62e74ec1057856932492410649a7b02d8d6645132\"" Apr 13 19:23:55.598869 containerd[1605]: time="2026-04-13T19:23:55.598821232Z" level=info msg="CreateContainer within sandbox \"faa226568857ab4b602886f62e74ec1057856932492410649a7b02d8d6645132\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:23:55.618054 containerd[1605]: time="2026-04-13T19:23:55.617917009Z" level=info msg="CreateContainer within sandbox \"faa226568857ab4b602886f62e74ec1057856932492410649a7b02d8d6645132\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"054fcd466bdcf549079ec34512ce009735f9756f3ef16128292770aad906f55c\"" Apr 13 19:23:55.620275 containerd[1605]: time="2026-04-13T19:23:55.620222209Z" level=info msg="StartContainer for \"054fcd466bdcf549079ec34512ce009735f9756f3ef16128292770aad906f55c\"" Apr 13 19:23:55.637075 containerd[1605]: time="2026-04-13T19:23:55.636200476Z" level=info msg="StartContainer for \"4b8b4b5ecd9eff711403708ffabe0e207aa9a01a47907fdd9ae9dcb714bb51dd\" returns successfully" Apr 13 19:23:55.696890 containerd[1605]: time="2026-04-13T19:23:55.696588839Z" level=info msg="StartContainer for \"054fcd466bdcf549079ec34512ce009735f9756f3ef16128292770aad906f55c\" returns successfully" Apr 13 19:23:56.131084 kubelet[2778]: I0413 19:23:56.130011 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5gtp7" podStartSLOduration=23.129994608 podStartE2EDuration="23.129994608s" podCreationTimestamp="2026-04-13 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:56.128527244 +0000 UTC m=+29.322099338" watchObservedRunningTime="2026-04-13 19:23:56.129994608 +0000 UTC m=+29.323566662" Apr 13 19:23:56.178290 kubelet[2778]: I0413 19:23:56.178199 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-26h4c" podStartSLOduration=23.178175481 podStartE2EDuration="23.178175481s" podCreationTimestamp="2026-04-13 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:23:56.152365939 +0000 UTC m=+29.345937993" watchObservedRunningTime="2026-04-13 19:23:56.178175481 +0000 UTC m=+29.371747535" Apr 13 19:23:56.433799 systemd[1]: run-containerd-runc-k8s.io-faa226568857ab4b602886f62e74ec1057856932492410649a7b02d8d6645132-runc.Kqeek9.mount: Deactivated successfully. 
Apr 13 19:25:38.875524 systemd[1]: Started sshd@7-178.105.8.180:22-50.85.169.122:33060.service - OpenSSH per-connection server daemon (50.85.169.122:33060). Apr 13 19:25:38.997845 sshd[4179]: Accepted publickey for core from 50.85.169.122 port 33060 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:38.999876 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:39.005621 systemd-logind[1571]: New session 8 of user core. Apr 13 19:25:39.009343 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:25:39.198334 sshd[4179]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:39.203937 systemd[1]: sshd@7-178.105.8.180:22-50.85.169.122:33060.service: Deactivated successfully. Apr 13 19:25:39.209743 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:25:39.213172 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:25:39.214607 systemd-logind[1571]: Removed session 8. Apr 13 19:25:44.220426 systemd[1]: Started sshd@8-178.105.8.180:22-50.85.169.122:37952.service - OpenSSH per-connection server daemon (50.85.169.122:37952). Apr 13 19:25:44.335366 sshd[4195]: Accepted publickey for core from 50.85.169.122 port 37952 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:44.336783 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:44.343730 systemd-logind[1571]: New session 9 of user core. Apr 13 19:25:44.353591 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:25:44.528277 sshd[4195]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:44.532113 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:25:44.532406 systemd[1]: sshd@8-178.105.8.180:22-50.85.169.122:37952.service: Deactivated successfully. Apr 13 19:25:44.536655 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 13 19:25:44.538350 systemd-logind[1571]: Removed session 9. Apr 13 19:25:49.553586 systemd[1]: Started sshd@9-178.105.8.180:22-50.85.169.122:59908.service - OpenSSH per-connection server daemon (50.85.169.122:59908). Apr 13 19:25:49.679717 sshd[4209]: Accepted publickey for core from 50.85.169.122 port 59908 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:49.683839 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:49.690302 systemd-logind[1571]: New session 10 of user core. Apr 13 19:25:49.699017 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:25:49.880825 sshd[4209]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:49.885798 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:25:49.886151 systemd[1]: sshd@9-178.105.8.180:22-50.85.169.122:59908.service: Deactivated successfully. Apr 13 19:25:49.891814 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:25:49.893421 systemd-logind[1571]: Removed session 10. Apr 13 19:25:54.905342 systemd[1]: Started sshd@10-178.105.8.180:22-50.85.169.122:59924.service - OpenSSH per-connection server daemon (50.85.169.122:59924). Apr 13 19:25:55.033062 sshd[4224]: Accepted publickey for core from 50.85.169.122 port 59924 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:55.034354 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:55.040485 systemd-logind[1571]: New session 11 of user core. Apr 13 19:25:55.043619 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:25:55.219471 sshd[4224]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:55.226617 systemd[1]: sshd@10-178.105.8.180:22-50.85.169.122:59924.service: Deactivated successfully. Apr 13 19:25:55.231769 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 13 19:25:55.234154 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:25:55.241380 systemd[1]: Started sshd@11-178.105.8.180:22-50.85.169.122:59930.service - OpenSSH per-connection server daemon (50.85.169.122:59930). Apr 13 19:25:55.242561 systemd-logind[1571]: Removed session 11. Apr 13 19:25:55.377096 sshd[4238]: Accepted publickey for core from 50.85.169.122 port 59930 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:55.380297 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:55.384813 systemd-logind[1571]: New session 12 of user core. Apr 13 19:25:55.396520 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:25:55.620305 sshd[4238]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:55.630689 systemd[1]: sshd@11-178.105.8.180:22-50.85.169.122:59930.service: Deactivated successfully. Apr 13 19:25:55.637371 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:25:55.638740 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:25:55.654174 systemd[1]: Started sshd@12-178.105.8.180:22-50.85.169.122:59940.service - OpenSSH per-connection server daemon (50.85.169.122:59940). Apr 13 19:25:55.655673 systemd-logind[1571]: Removed session 12. Apr 13 19:25:55.776316 sshd[4250]: Accepted publickey for core from 50.85.169.122 port 59940 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:25:55.777382 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:55.783078 systemd-logind[1571]: New session 13 of user core. Apr 13 19:25:55.790281 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:25:55.960628 sshd[4250]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:55.965234 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. 
Apr 13 19:25:55.965489 systemd[1]: sshd@12-178.105.8.180:22-50.85.169.122:59940.service: Deactivated successfully. Apr 13 19:25:55.970312 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:25:55.971805 systemd-logind[1571]: Removed session 13. Apr 13 19:26:00.987407 systemd[1]: Started sshd@13-178.105.8.180:22-50.85.169.122:54398.service - OpenSSH per-connection server daemon (50.85.169.122:54398). Apr 13 19:26:01.109551 sshd[4263]: Accepted publickey for core from 50.85.169.122 port 54398 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:01.111373 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:01.119978 systemd-logind[1571]: New session 14 of user core. Apr 13 19:26:01.123443 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:26:01.304406 sshd[4263]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:01.309580 systemd[1]: sshd@13-178.105.8.180:22-50.85.169.122:54398.service: Deactivated successfully. Apr 13 19:26:01.314218 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:26:01.315524 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:26:01.316823 systemd-logind[1571]: Removed session 14. Apr 13 19:26:06.335543 systemd[1]: Started sshd@14-178.105.8.180:22-50.85.169.122:54402.service - OpenSSH per-connection server daemon (50.85.169.122:54402). Apr 13 19:26:06.458375 sshd[4278]: Accepted publickey for core from 50.85.169.122 port 54402 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:06.460663 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:06.466589 systemd-logind[1571]: New session 15 of user core. Apr 13 19:26:06.472534 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 13 19:26:06.652123 sshd[4278]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:06.658733 systemd[1]: sshd@14-178.105.8.180:22-50.85.169.122:54402.service: Deactivated successfully. Apr 13 19:26:06.662973 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:26:06.664091 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:26:06.665314 systemd-logind[1571]: Removed session 15. Apr 13 19:26:06.675605 systemd[1]: Started sshd@15-178.105.8.180:22-50.85.169.122:54406.service - OpenSSH per-connection server daemon (50.85.169.122:54406). Apr 13 19:26:06.795008 sshd[4292]: Accepted publickey for core from 50.85.169.122 port 54406 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:06.797548 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:06.805352 systemd-logind[1571]: New session 16 of user core. Apr 13 19:26:06.810583 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:26:07.075330 sshd[4292]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:07.082459 systemd[1]: sshd@15-178.105.8.180:22-50.85.169.122:54406.service: Deactivated successfully. Apr 13 19:26:07.086738 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:26:07.087398 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:26:07.089228 systemd-logind[1571]: Removed session 16. Apr 13 19:26:07.099318 systemd[1]: Started sshd@16-178.105.8.180:22-50.85.169.122:54416.service - OpenSSH per-connection server daemon (50.85.169.122:54416). Apr 13 19:26:07.224276 sshd[4303]: Accepted publickey for core from 50.85.169.122 port 54416 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:07.225568 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:07.231474 systemd-logind[1571]: New session 17 of user core. 
Apr 13 19:26:07.235633 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:26:08.086907 sshd[4303]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:08.102426 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:26:08.104277 systemd[1]: sshd@16-178.105.8.180:22-50.85.169.122:54416.service: Deactivated successfully. Apr 13 19:26:08.107642 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:08.115553 systemd[1]: Started sshd@17-178.105.8.180:22-50.85.169.122:54430.service - OpenSSH per-connection server daemon (50.85.169.122:54430). Apr 13 19:26:08.118124 systemd-logind[1571]: Removed session 17. Apr 13 19:26:08.246815 sshd[4322]: Accepted publickey for core from 50.85.169.122 port 54430 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:08.249565 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:08.254826 systemd-logind[1571]: New session 18 of user core. Apr 13 19:26:08.259438 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:08.582776 sshd[4322]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:08.592785 systemd[1]: sshd@17-178.105.8.180:22-50.85.169.122:54430.service: Deactivated successfully. Apr 13 19:26:08.597670 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:08.601155 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:08.610427 systemd[1]: Started sshd@18-178.105.8.180:22-50.85.169.122:54446.service - OpenSSH per-connection server daemon (50.85.169.122:54446). Apr 13 19:26:08.612494 systemd-logind[1571]: Removed session 18. 
Apr 13 19:26:08.732591 sshd[4335]: Accepted publickey for core from 50.85.169.122 port 54446 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:08.738620 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:08.746303 systemd-logind[1571]: New session 19 of user core. Apr 13 19:26:08.749385 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:26:08.923369 sshd[4335]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:08.930668 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:08.931450 systemd[1]: sshd@18-178.105.8.180:22-50.85.169.122:54446.service: Deactivated successfully. Apr 13 19:26:08.934557 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:08.935778 systemd-logind[1571]: Removed session 19. Apr 13 19:26:13.944357 systemd[1]: Started sshd@19-178.105.8.180:22-50.85.169.122:45156.service - OpenSSH per-connection server daemon (50.85.169.122:45156). Apr 13 19:26:14.061520 sshd[4351]: Accepted publickey for core from 50.85.169.122 port 45156 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:14.064827 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:14.070736 systemd-logind[1571]: New session 20 of user core. Apr 13 19:26:14.080473 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:14.252647 sshd[4351]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:14.263637 systemd[1]: sshd@19-178.105.8.180:22-50.85.169.122:45156.service: Deactivated successfully. Apr 13 19:26:14.268674 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:14.271593 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:14.274825 systemd-logind[1571]: Removed session 20. 
Apr 13 19:26:19.277893 systemd[1]: Started sshd@20-178.105.8.180:22-50.85.169.122:45166.service - OpenSSH per-connection server daemon (50.85.169.122:45166). Apr 13 19:26:19.413388 sshd[4365]: Accepted publickey for core from 50.85.169.122 port 45166 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:19.414796 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:19.422016 systemd-logind[1571]: New session 21 of user core. Apr 13 19:26:19.425400 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:26:19.599938 sshd[4365]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:19.607546 systemd[1]: sshd@20-178.105.8.180:22-50.85.169.122:45166.service: Deactivated successfully. Apr 13 19:26:19.611260 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:26:19.611434 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:26:19.613508 systemd-logind[1571]: Removed session 21. Apr 13 19:26:24.625398 systemd[1]: Started sshd@21-178.105.8.180:22-50.85.169.122:54366.service - OpenSSH per-connection server daemon (50.85.169.122:54366). Apr 13 19:26:24.748729 sshd[4379]: Accepted publickey for core from 50.85.169.122 port 54366 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:24.752201 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:24.757414 systemd-logind[1571]: New session 22 of user core. Apr 13 19:26:24.764811 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:26:24.938345 sshd[4379]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:24.945239 systemd[1]: sshd@21-178.105.8.180:22-50.85.169.122:54366.service: Deactivated successfully. Apr 13 19:26:24.948959 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:26:24.949202 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. 
Apr 13 19:26:24.955282 systemd-logind[1571]: Removed session 22. Apr 13 19:26:24.961716 systemd[1]: Started sshd@22-178.105.8.180:22-50.85.169.122:54380.service - OpenSSH per-connection server daemon (50.85.169.122:54380). Apr 13 19:26:25.076557 sshd[4392]: Accepted publickey for core from 50.85.169.122 port 54380 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms Apr 13 19:26:25.078635 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:25.084431 systemd-logind[1571]: New session 23 of user core. Apr 13 19:26:25.092479 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:26:28.049629 containerd[1605]: time="2026-04-13T19:26:28.049506355Z" level=info msg="StopContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" with timeout 30 (s)" Apr 13 19:26:28.053687 containerd[1605]: time="2026-04-13T19:26:28.053523616Z" level=info msg="Stop container \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" with signal terminated" Apr 13 19:26:28.067243 containerd[1605]: time="2026-04-13T19:26:28.067162117Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:26:28.079394 containerd[1605]: time="2026-04-13T19:26:28.079351821Z" level=info msg="StopContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" with timeout 2 (s)" Apr 13 19:26:28.079725 containerd[1605]: time="2026-04-13T19:26:28.079695950Z" level=info msg="Stop container \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" with signal terminated" Apr 13 19:26:28.088964 systemd-networkd[1249]: lxc_health: Link DOWN Apr 13 19:26:28.088972 systemd-networkd[1249]: lxc_health: Lost carrier Apr 13 19:26:28.113147 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5-rootfs.mount: Deactivated successfully. Apr 13 19:26:28.123009 containerd[1605]: time="2026-04-13T19:26:28.122794708Z" level=info msg="shim disconnected" id=9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5 namespace=k8s.io Apr 13 19:26:28.123009 containerd[1605]: time="2026-04-13T19:26:28.122861989Z" level=warning msg="cleaning up after shim disconnected" id=9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5 namespace=k8s.io Apr 13 19:26:28.123009 containerd[1605]: time="2026-04-13T19:26:28.122876030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:28.147521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b-rootfs.mount: Deactivated successfully. Apr 13 19:26:28.149097 containerd[1605]: time="2026-04-13T19:26:28.147958137Z" level=info msg="StopContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" returns successfully" Apr 13 19:26:28.149673 containerd[1605]: time="2026-04-13T19:26:28.149557377Z" level=info msg="StopPodSandbox for \"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\"" Apr 13 19:26:28.149673 containerd[1605]: time="2026-04-13T19:26:28.149597218Z" level=info msg="Container to stop \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.151493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7-shm.mount: Deactivated successfully. 
Apr 13 19:26:28.153926 containerd[1605]: time="2026-04-13T19:26:28.153873045Z" level=info msg="shim disconnected" id=6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b namespace=k8s.io Apr 13 19:26:28.154004 containerd[1605]: time="2026-04-13T19:26:28.153936527Z" level=warning msg="cleaning up after shim disconnected" id=6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b namespace=k8s.io Apr 13 19:26:28.154004 containerd[1605]: time="2026-04-13T19:26:28.153946167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:28.189506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7-rootfs.mount: Deactivated successfully. Apr 13 19:26:28.190098 containerd[1605]: time="2026-04-13T19:26:28.189827064Z" level=info msg="StopContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" returns successfully" Apr 13 19:26:28.192288 containerd[1605]: time="2026-04-13T19:26:28.191758152Z" level=info msg="StopPodSandbox for \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\"" Apr 13 19:26:28.192467 containerd[1605]: time="2026-04-13T19:26:28.192441650Z" level=info msg="Container to stop \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.192536 containerd[1605]: time="2026-04-13T19:26:28.192519411Z" level=info msg="Container to stop \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.192605 containerd[1605]: time="2026-04-13T19:26:28.192590253Z" level=info msg="Container to stop \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.192662 containerd[1605]: time="2026-04-13T19:26:28.192649615Z" level=info 
msg="Container to stop \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.193190 containerd[1605]: time="2026-04-13T19:26:28.192699896Z" level=info msg="Container to stop \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:28.197946 containerd[1605]: time="2026-04-13T19:26:28.197829344Z" level=info msg="shim disconnected" id=bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7 namespace=k8s.io Apr 13 19:26:28.198753 containerd[1605]: time="2026-04-13T19:26:28.198506201Z" level=warning msg="cleaning up after shim disconnected" id=bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7 namespace=k8s.io Apr 13 19:26:28.198753 containerd[1605]: time="2026-04-13T19:26:28.198545282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:28.219010 containerd[1605]: time="2026-04-13T19:26:28.218967753Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:26:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:26:28.220475 containerd[1605]: time="2026-04-13T19:26:28.220286946Z" level=info msg="TearDown network for sandbox \"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\" successfully" Apr 13 19:26:28.220475 containerd[1605]: time="2026-04-13T19:26:28.220318627Z" level=info msg="StopPodSandbox for \"bc7cbf17d9054f6ce69366ee0b9d6c84311b6b0f9cd76912582c382f7fc4ded7\" returns successfully" Apr 13 19:26:28.236545 containerd[1605]: time="2026-04-13T19:26:28.236472671Z" level=info msg="shim disconnected" id=83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78 namespace=k8s.io Apr 13 19:26:28.237123 containerd[1605]: time="2026-04-13T19:26:28.236781918Z" level=warning 
msg="cleaning up after shim disconnected" id=83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78 namespace=k8s.io Apr 13 19:26:28.237123 containerd[1605]: time="2026-04-13T19:26:28.236799679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:28.250456 containerd[1605]: time="2026-04-13T19:26:28.250411379Z" level=info msg="TearDown network for sandbox \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" successfully" Apr 13 19:26:28.250620 containerd[1605]: time="2026-04-13T19:26:28.250605344Z" level=info msg="StopPodSandbox for \"83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78\" returns successfully" Apr 13 19:26:28.382341 kubelet[2778]: I0413 19:26:28.382187 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dc0b252-8426-470d-b95c-77b0da19e18d-clustermesh-secrets\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.385386 kubelet[2778]: I0413 19:26:28.385228 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cni-path\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385618 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-cilium-config-path\") pod \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\" (UID: \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385699 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-config-path\") pod 
\"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385738 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-bpf-maps\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385772 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-hostproc\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385803 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-cgroup\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387055 kubelet[2778]: I0413 19:26:28.385852 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q84kv\" (UniqueName: \"kubernetes.io/projected/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-kube-api-access-q84kv\") pod \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\" (UID: \"c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.385888 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-etc-cni-netd\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.385922 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-kernel\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.385961 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-hubble-tls\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.386000 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqkd\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.386055 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-lib-modules\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387480 kubelet[2778]: I0413 19:26:28.386103 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-xtables-lock\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387808 kubelet[2778]: I0413 19:26:28.386136 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-net\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387808 kubelet[2778]: I0413 
19:26:28.386168 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-run\") pod \"0dc0b252-8426-470d-b95c-77b0da19e18d\" (UID: \"0dc0b252-8426-470d-b95c-77b0da19e18d\") " Apr 13 19:26:28.387808 kubelet[2778]: I0413 19:26:28.386277 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.387808 kubelet[2778]: I0413 19:26:28.386338 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cni-path" (OuterVolumeSpecName: "cni-path") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.390460 kubelet[2778]: I0413 19:26:28.390417 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06" (UID: "c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:28.390714 kubelet[2778]: I0413 19:26:28.390681 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.390813 kubelet[2778]: I0413 19:26:28.390800 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.390886 kubelet[2778]: I0413 19:26:28.390875 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-hostproc" (OuterVolumeSpecName: "hostproc") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.390960 kubelet[2778]: I0413 19:26:28.390949 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.391697 kubelet[2778]: I0413 19:26:28.391662 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.391771 kubelet[2778]: I0413 19:26:28.391719 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.393876 kubelet[2778]: I0413 19:26:28.393842 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.393946 kubelet[2778]: I0413 19:26:28.393881 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:28.395197 kubelet[2778]: I0413 19:26:28.395108 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:28.395197 kubelet[2778]: I0413 19:26:28.395117 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-kube-api-access-q84kv" (OuterVolumeSpecName: "kube-api-access-q84kv") pod "c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06" (UID: "c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06"). InnerVolumeSpecName "kube-api-access-q84kv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:28.395995 kubelet[2778]: I0413 19:26:28.395648 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc0b252-8426-470d-b95c-77b0da19e18d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:26:28.396135 kubelet[2778]: I0413 19:26:28.396114 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:28.397072 kubelet[2778]: I0413 19:26:28.397047 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd" (OuterVolumeSpecName: "kube-api-access-vwqkd") pod "0dc0b252-8426-470d-b95c-77b0da19e18d" (UID: "0dc0b252-8426-470d-b95c-77b0da19e18d"). InnerVolumeSpecName "kube-api-access-vwqkd". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 19:26:28.486952 kubelet[2778]: I0413 19:26:28.486897 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cni-path\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487398 kubelet[2778]: I0413 19:26:28.487253 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-cilium-config-path\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487398 kubelet[2778]: I0413 19:26:28.487358 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-config-path\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487627 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-bpf-maps\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487670 2778 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-hostproc\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487714 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-cgroup\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487742 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q84kv\" (UniqueName: \"kubernetes.io/projected/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06-kube-api-access-q84kv\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487764 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-etc-cni-netd\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.487879 kubelet[2778]: I0413 19:26:28.487832 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-kernel\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488328 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-hubble-tls\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488367 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwqkd\" (UniqueName: \"kubernetes.io/projected/0dc0b252-8426-470d-b95c-77b0da19e18d-kube-api-access-vwqkd\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488441 2778 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-lib-modules\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488488 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-xtables-lock\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488513 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-host-proc-sys-net\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488535 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dc0b252-8426-470d-b95c-77b0da19e18d-cilium-run\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.488611 kubelet[2778]: I0413 19:26:28.488571 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dc0b252-8426-470d-b95c-77b0da19e18d-clustermesh-secrets\") on node \"ci-4081-3-7-f-96a1162b98\" DevicePath \"\""
Apr 13 19:26:28.522165 kubelet[2778]: I0413 19:26:28.522125 2778 scope.go:117] "RemoveContainer" containerID="6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b"
Apr 13 19:26:28.528059 containerd[1605]: time="2026-04-13T19:26:28.527915759Z" level=info msg="RemoveContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\""
Apr 13 19:26:28.535849 containerd[1605]: time="2026-04-13T19:26:28.535743755Z" level=info msg="RemoveContainer for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" returns successfully"
Apr 13 19:26:28.536364 kubelet[2778]: I0413 19:26:28.536279 2778 scope.go:117] "RemoveContainer" containerID="10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686"
Apr 13 19:26:28.539260 containerd[1605]: time="2026-04-13T19:26:28.539219402Z" level=info msg="RemoveContainer for \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\""
Apr 13 19:26:28.544021 containerd[1605]: time="2026-04-13T19:26:28.543786116Z" level=info msg="RemoveContainer for \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\" returns successfully"
Apr 13 19:26:28.544183 kubelet[2778]: I0413 19:26:28.544161 2778 scope.go:117] "RemoveContainer" containerID="5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60"
Apr 13 19:26:28.548736 containerd[1605]: time="2026-04-13T19:26:28.548306309Z" level=info msg="RemoveContainer for \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\""
Apr 13 19:26:28.554371 containerd[1605]: time="2026-04-13T19:26:28.554328659Z" level=info msg="RemoveContainer for \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\" returns successfully"
Apr 13 19:26:28.554875 kubelet[2778]: I0413 19:26:28.554716 2778 scope.go:117] "RemoveContainer" containerID="4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8"
Apr 13 19:26:28.556006 containerd[1605]: time="2026-04-13T19:26:28.555938060Z" level=info msg="RemoveContainer for \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\""
Apr 13 19:26:28.560952 containerd[1605]: time="2026-04-13T19:26:28.560902704Z" level=info msg="RemoveContainer for \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\" returns successfully"
Apr 13 19:26:28.561463 kubelet[2778]: I0413 19:26:28.561422 2778 scope.go:117] "RemoveContainer" containerID="53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a"
Apr 13 19:26:28.563247 containerd[1605]: time="2026-04-13T19:26:28.563212642Z" level=info msg="RemoveContainer for \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\""
Apr 13 19:26:28.567813 containerd[1605]: time="2026-04-13T19:26:28.567771916Z" level=info msg="RemoveContainer for \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\" returns successfully"
Apr 13 19:26:28.568179 kubelet[2778]: I0413 19:26:28.568157 2778 scope.go:117] "RemoveContainer" containerID="6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b"
Apr 13 19:26:28.568460 containerd[1605]: time="2026-04-13T19:26:28.568424492Z" level=error msg="ContainerStatus for \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\": not found"
Apr 13 19:26:28.568575 kubelet[2778]: E0413 19:26:28.568554 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\": not found" containerID="6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b"
Apr 13 19:26:28.568629 kubelet[2778]: I0413 19:26:28.568586 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b"} err="failed to get container status \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6392934fd7691f390f4deef4a5e441b6315eceeee616287e707d78a918558f6b\": not found"
Apr 13 19:26:28.568629 kubelet[2778]: I0413 19:26:28.568626 2778 scope.go:117] "RemoveContainer" containerID="10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686"
Apr 13 19:26:28.573046 containerd[1605]: time="2026-04-13T19:26:28.572107864Z" level=error msg="ContainerStatus for \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\": not found"
Apr 13 19:26:28.573154 kubelet[2778]: E0413 19:26:28.572330 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\": not found" containerID="10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686"
Apr 13 19:26:28.573154 kubelet[2778]: I0413 19:26:28.572364 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686"} err="failed to get container status \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\": rpc error: code = NotFound desc = an error occurred when try to find container \"10f10509602ab8628478d1ed98a9f2f8d483e856da5571585eac64034ac3d686\": not found"
Apr 13 19:26:28.573154 kubelet[2778]: I0413 19:26:28.572387 2778 scope.go:117] "RemoveContainer" containerID="5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60"
Apr 13 19:26:28.574254 containerd[1605]: time="2026-04-13T19:26:28.574202796Z" level=error msg="ContainerStatus for \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\": not found"
Apr 13 19:26:28.574665 kubelet[2778]: E0413 19:26:28.574489 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\": not found" containerID="5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60"
Apr 13 19:26:28.574720 kubelet[2778]: I0413 19:26:28.574673 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60"} err="failed to get container status \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e4f2200063097b8d34179deab265bc9f29baca7a9d280dc6cf4c203ca7a1f60\": not found"
Apr 13 19:26:28.574720 kubelet[2778]: I0413 19:26:28.574694 2778 scope.go:117] "RemoveContainer" containerID="4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8"
Apr 13 19:26:28.576177 containerd[1605]: time="2026-04-13T19:26:28.576142485Z" level=error msg="ContainerStatus for \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\": not found"
Apr 13 19:26:28.576438 kubelet[2778]: E0413 19:26:28.576376 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\": not found" containerID="4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8"
Apr 13 19:26:28.576500 kubelet[2778]: I0413 19:26:28.576455 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8"} err="failed to get container status \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f8f03d43204142e93412f17b850617c18130de6ee1d51d11a3690397c541ea8\": not found"
Apr 13 19:26:28.576500 kubelet[2778]: I0413 19:26:28.576478 2778 scope.go:117] "RemoveContainer" containerID="53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a"
Apr 13 19:26:28.576853 containerd[1605]: time="2026-04-13T19:26:28.576775781Z" level=error msg="ContainerStatus for \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\": not found"
Apr 13 19:26:28.577047 kubelet[2778]: E0413 19:26:28.577003 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\": not found" containerID="53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a"
Apr 13 19:26:28.577134 kubelet[2778]: I0413 19:26:28.577116 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a"} err="failed to get container status \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"53ef11c467d9e28ab008e1f664e39cf68c07046eac95fcbb91c4612f9e884b9a\": not found"
Apr 13 19:26:28.577197 kubelet[2778]: I0413 19:26:28.577186 2778 scope.go:117] "RemoveContainer" containerID="9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5"
Apr 13 19:26:28.578451 containerd[1605]: time="2026-04-13T19:26:28.578320379Z" level=info msg="RemoveContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\""
Apr 13 19:26:28.582724 containerd[1605]: time="2026-04-13T19:26:28.581912269Z" level=info msg="RemoveContainer for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" returns successfully"
Apr 13 19:26:28.582724 containerd[1605]: time="2026-04-13T19:26:28.582542325Z" level=error msg="ContainerStatus for \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\": not found"
Apr 13 19:26:28.582861 kubelet[2778]: I0413 19:26:28.582261 2778 scope.go:117] "RemoveContainer" containerID="9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5"
Apr 13 19:26:28.582861 kubelet[2778]: E0413 19:26:28.582684 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\": not found" containerID="9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5"
Apr 13 19:26:28.582861 kubelet[2778]: I0413 19:26:28.582738 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5"} err="failed to get container status \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a5ba0f94677deac5c9d7f4a7a9f02c6e00829ae85eecc2f0d8a2cccd661a3a5\": not found"
Apr 13 19:26:28.950560 kubelet[2778]: I0413 19:26:28.950507 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc0b252-8426-470d-b95c-77b0da19e18d" path="/var/lib/kubelet/pods/0dc0b252-8426-470d-b95c-77b0da19e18d/volumes"
Apr 13 19:26:28.951695 kubelet[2778]: I0413 19:26:28.951662 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06" path="/var/lib/kubelet/pods/c6b23f0e-d74b-4e3d-b899-ca9c8a9a7e06/volumes"
Apr 13 19:26:29.053649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78-rootfs.mount: Deactivated successfully.
Apr 13 19:26:29.053803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83da1c4511ff2e91433e54b347e1cb3791d38500d0a8729fc65b1d1bbf907a78-shm.mount: Deactivated successfully.
Apr 13 19:26:29.053899 systemd[1]: var-lib-kubelet-pods-c6b23f0e\x2dd74b\x2d4e3d\x2db899\x2dca9c8a9a7e06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq84kv.mount: Deactivated successfully.
Apr 13 19:26:29.053977 systemd[1]: var-lib-kubelet-pods-0dc0b252\x2d8426\x2d470d\x2db95c\x2d77b0da19e18d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwqkd.mount: Deactivated successfully.
Apr 13 19:26:29.054078 systemd[1]: var-lib-kubelet-pods-0dc0b252\x2d8426\x2d470d\x2db95c\x2d77b0da19e18d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 13 19:26:29.054158 systemd[1]: var-lib-kubelet-pods-0dc0b252\x2d8426\x2d470d\x2db95c\x2d77b0da19e18d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 13 19:26:29.989157 sshd[4392]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:29.995405 systemd[1]: sshd@22-178.105.8.180:22-50.85.169.122:54380.service: Deactivated successfully.
Apr 13 19:26:30.007259 systemd[1]: session-23.scope: Deactivated successfully.
Apr 13 19:26:30.010652 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit.
Apr 13 19:26:30.023789 systemd[1]: Started sshd@23-178.105.8.180:22-50.85.169.122:49308.service - OpenSSH per-connection server daemon (50.85.169.122:49308).
Apr 13 19:26:30.024948 systemd-logind[1571]: Removed session 23.
Apr 13 19:26:30.136799 sshd[4562]: Accepted publickey for core from 50.85.169.122 port 49308 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:26:30.139872 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:30.147889 systemd-logind[1571]: New session 24 of user core.
Apr 13 19:26:30.155726 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 13 19:26:32.092352 kubelet[2778]: E0413 19:26:32.092187 2778 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 19:26:32.306077 sshd[4562]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:32.316992 systemd[1]: sshd@23-178.105.8.180:22-50.85.169.122:49308.service: Deactivated successfully.
Apr 13 19:26:32.325630 systemd[1]: session-24.scope: Deactivated successfully.
Apr 13 19:26:32.339395 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit.
Apr 13 19:26:32.348363 systemd[1]: Started sshd@24-178.105.8.180:22-50.85.169.122:49312.service - OpenSSH per-connection server daemon (50.85.169.122:49312).
Apr 13 19:26:32.351009 systemd-logind[1571]: Removed session 24.
Apr 13 19:26:32.420246 kubelet[2778]: I0413 19:26:32.420200 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-etc-cni-netd\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420246 kubelet[2778]: I0413 19:26:32.420252 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-lib-modules\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420274 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-xtables-lock\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420292 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-hostproc\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420309 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhvr\" (UniqueName: \"kubernetes.io/projected/7c3343ff-b18f-40bb-aed8-df0d5471b208-kube-api-access-8zhvr\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420328 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-bpf-maps\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420343 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-cilium-run\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.420438 kubelet[2778]: I0413 19:26:32.420359 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-cilium-cgroup\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421169 kubelet[2778]: I0413 19:26:32.420375 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c3343ff-b18f-40bb-aed8-df0d5471b208-cilium-ipsec-secrets\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421169 kubelet[2778]: I0413 19:26:32.420389 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c3343ff-b18f-40bb-aed8-df0d5471b208-cilium-config-path\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421169 kubelet[2778]: I0413 19:26:32.420404 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-host-proc-sys-net\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421169 kubelet[2778]: I0413 19:26:32.420421 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c3343ff-b18f-40bb-aed8-df0d5471b208-clustermesh-secrets\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421169 kubelet[2778]: I0413 19:26:32.420439 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-host-proc-sys-kernel\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421285 kubelet[2778]: I0413 19:26:32.420470 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c3343ff-b18f-40bb-aed8-df0d5471b208-hubble-tls\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.421285 kubelet[2778]: I0413 19:26:32.420485 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c3343ff-b18f-40bb-aed8-df0d5471b208-cni-path\") pod \"cilium-pg9tb\" (UID: \"7c3343ff-b18f-40bb-aed8-df0d5471b208\") " pod="kube-system/cilium-pg9tb"
Apr 13 19:26:32.501683 sshd[4575]: Accepted publickey for core from 50.85.169.122 port 49312 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:26:32.504700 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:32.510768 systemd-logind[1571]: New session 25 of user core.
Apr 13 19:26:32.520763 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 13 19:26:32.629226 sshd[4575]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:32.634425 containerd[1605]: time="2026-04-13T19:26:32.634012471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pg9tb,Uid:7c3343ff-b18f-40bb-aed8-df0d5471b208,Namespace:kube-system,Attempt:0,}"
Apr 13 19:26:32.636818 systemd[1]: sshd@24-178.105.8.180:22-50.85.169.122:49312.service: Deactivated successfully.
Apr 13 19:26:32.642091 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit.
Apr 13 19:26:32.642786 systemd[1]: session-25.scope: Deactivated successfully.
Apr 13 19:26:32.666156 systemd[1]: Started sshd@25-178.105.8.180:22-50.85.169.122:49320.service - OpenSSH per-connection server daemon (50.85.169.122:49320).
Apr 13 19:26:32.667424 containerd[1605]: time="2026-04-13T19:26:32.664594698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:26:32.667424 containerd[1605]: time="2026-04-13T19:26:32.664700781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:26:32.667424 containerd[1605]: time="2026-04-13T19:26:32.664718181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:32.667424 containerd[1605]: time="2026-04-13T19:26:32.664818744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:32.667739 systemd-logind[1571]: Removed session 25.
Apr 13 19:26:32.709386 containerd[1605]: time="2026-04-13T19:26:32.709072746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pg9tb,Uid:7c3343ff-b18f-40bb-aed8-df0d5471b208,Namespace:kube-system,Attempt:0,} returns sandbox id \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\""
Apr 13 19:26:32.718202 containerd[1605]: time="2026-04-13T19:26:32.718160288Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 19:26:32.734549 containerd[1605]: time="2026-04-13T19:26:32.733729628Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86144deeb5e69017d2286fdaf5e354c4e3567fdb5ce82310f6535c47baa01f1c\""
Apr 13 19:26:32.737103 containerd[1605]: time="2026-04-13T19:26:32.736718901Z" level=info msg="StartContainer for \"86144deeb5e69017d2286fdaf5e354c4e3567fdb5ce82310f6535c47baa01f1c\""
Apr 13 19:26:32.786772 sshd[4602]: Accepted publickey for core from 50.85.169.122 port 49320 ssh2: RSA SHA256:iZ69s7jdfZeZWl77uzTdj7kYKrt9+aDLOIz6i/Hnoms
Apr 13 19:26:32.790735 sshd[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:32.795868 containerd[1605]: time="2026-04-13T19:26:32.794716959Z" level=info msg="StartContainer for \"86144deeb5e69017d2286fdaf5e354c4e3567fdb5ce82310f6535c47baa01f1c\" returns successfully"
Apr 13 19:26:32.812272 systemd-logind[1571]: New session 26 of user core.
Apr 13 19:26:32.814304 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 13 19:26:32.851698 containerd[1605]: time="2026-04-13T19:26:32.851393585Z" level=info msg="shim disconnected" id=86144deeb5e69017d2286fdaf5e354c4e3567fdb5ce82310f6535c47baa01f1c namespace=k8s.io
Apr 13 19:26:32.851698 containerd[1605]: time="2026-04-13T19:26:32.851639671Z" level=warning msg="cleaning up after shim disconnected" id=86144deeb5e69017d2286fdaf5e354c4e3567fdb5ce82310f6535c47baa01f1c namespace=k8s.io
Apr 13 19:26:32.852264 containerd[1605]: time="2026-04-13T19:26:32.851778794Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:33.554002 containerd[1605]: time="2026-04-13T19:26:33.553873762Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 19:26:33.571899 containerd[1605]: time="2026-04-13T19:26:33.571176382Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85\""
Apr 13 19:26:33.577727 containerd[1605]: time="2026-04-13T19:26:33.575844576Z" level=info msg="StartContainer for \"5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85\""
Apr 13 19:26:33.630370 containerd[1605]: time="2026-04-13T19:26:33.630304820Z" level=info msg="StartContainer for \"5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85\" returns successfully"
Apr 13 19:26:33.662278 containerd[1605]: time="2026-04-13T19:26:33.662201195Z" level=info msg="shim disconnected" id=5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85 namespace=k8s.io
Apr 13 19:26:33.662278 containerd[1605]: time="2026-04-13T19:26:33.662259237Z" level=warning msg="cleaning up after shim disconnected" id=5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85 namespace=k8s.io
Apr 13 19:26:33.662278 containerd[1605]: time="2026-04-13T19:26:33.662268877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:33.674000 containerd[1605]: time="2026-04-13T19:26:33.673931480Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:26:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:26:34.536196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ad9327e2fe1610665acb1f0eaf0b430b8bfb0fa05d3de115b1f549482a07a85-rootfs.mount: Deactivated successfully.
Apr 13 19:26:34.557805 containerd[1605]: time="2026-04-13T19:26:34.557707130Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 19:26:34.575422 containerd[1605]: time="2026-04-13T19:26:34.575265595Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d\""
Apr 13 19:26:34.576194 containerd[1605]: time="2026-04-13T19:26:34.576125295Z" level=info msg="StartContainer for \"cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d\""
Apr 13 19:26:34.640655 containerd[1605]: time="2026-04-13T19:26:34.640006480Z" level=info msg="StartContainer for \"cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d\" returns successfully"
Apr 13 19:26:34.667406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d-rootfs.mount: Deactivated successfully.
Apr 13 19:26:34.672232 containerd[1605]: time="2026-04-13T19:26:34.672171337Z" level=info msg="shim disconnected" id=cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d namespace=k8s.io
Apr 13 19:26:34.672232 containerd[1605]: time="2026-04-13T19:26:34.672229299Z" level=warning msg="cleaning up after shim disconnected" id=cc5f60e32ee2e4bc9f2279972393bde8cfce5fdf9fd1857daa6748648fe3665d namespace=k8s.io
Apr 13 19:26:34.672232 containerd[1605]: time="2026-04-13T19:26:34.672237819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:35.568757 containerd[1605]: time="2026-04-13T19:26:35.568705378Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 19:26:35.589647 containerd[1605]: time="2026-04-13T19:26:35.589595880Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92\""
Apr 13 19:26:35.590268 containerd[1605]: time="2026-04-13T19:26:35.590247136Z" level=info msg="StartContainer for \"4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92\""
Apr 13 19:26:35.651823 containerd[1605]: time="2026-04-13T19:26:35.651427807Z" level=info msg="StartContainer for \"4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92\" returns successfully"
Apr 13 19:26:35.670536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92-rootfs.mount: Deactivated successfully.
Apr 13 19:26:35.677727 containerd[1605]: time="2026-04-13T19:26:35.677353070Z" level=info msg="shim disconnected" id=4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92 namespace=k8s.io
Apr 13 19:26:35.677727 containerd[1605]: time="2026-04-13T19:26:35.677408311Z" level=warning msg="cleaning up after shim disconnected" id=4d834c999348a7a5c401c5b4136b531ffe520c7cb67e8ed9ddeabd4e5e1b6b92 namespace=k8s.io
Apr 13 19:26:35.677727 containerd[1605]: time="2026-04-13T19:26:35.677417592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:36.572849 containerd[1605]: time="2026-04-13T19:26:36.571888626Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 19:26:36.596968 containerd[1605]: time="2026-04-13T19:26:36.596840382Z" level=info msg="CreateContainer within sandbox \"155a517a4dff518f75774306377388d2141e16378fd72a396167b37e5ea666e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18b9cbe1ad5544e1ab085374b075a528c96140d39badd07209b84f4c3d2f9887\""
Apr 13 19:26:36.597600 containerd[1605]: time="2026-04-13T19:26:36.597566240Z" level=info msg="StartContainer for \"18b9cbe1ad5544e1ab085374b075a528c96140d39badd07209b84f4c3d2f9887\""
Apr 13 19:26:36.653888 containerd[1605]: time="2026-04-13T19:26:36.653847066Z" level=info msg="StartContainer for \"18b9cbe1ad5544e1ab085374b075a528c96140d39badd07209b84f4c3d2f9887\" returns successfully"
Apr 13 19:26:36.976092 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 13 19:26:39.974674 systemd-networkd[1249]: lxc_health: Link UP
Apr 13 19:26:39.994324 systemd-networkd[1249]: lxc_health: Gained carrier
Apr 13 19:26:40.663815 kubelet[2778]: I0413 19:26:40.663546 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pg9tb" podStartSLOduration=8.663509678 podStartE2EDuration="8.663509678s" podCreationTimestamp="2026-04-13 19:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:26:37.594830815 +0000 UTC m=+190.788402909" watchObservedRunningTime="2026-04-13 19:26:40.663509678 +0000 UTC m=+193.857081732"
Apr 13 19:26:42.024294 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Apr 13 19:26:45.751168 kubelet[2778]: E0413 19:26:45.750767 2778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58540->127.0.0.1:46297: write tcp 127.0.0.1:58540->127.0.0.1:46297: write: broken pipe
Apr 13 19:26:45.770780 sshd[4602]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:45.778647 systemd[1]: sshd@25-178.105.8.180:22-50.85.169.122:49320.service: Deactivated successfully.
Apr 13 19:26:45.780154 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit.
Apr 13 19:26:45.781986 systemd[1]: session-26.scope: Deactivated successfully.
Apr 13 19:26:45.786665 systemd-logind[1571]: Removed session 26.