Apr 21 09:57:32.880727 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 21 09:57:32.880751 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 09:57:32.880762 kernel: KASLR enabled
Apr 21 09:57:32.880767 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 09:57:32.880773 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 21 09:57:32.880779 kernel: random: crng init done
Apr 21 09:57:32.880786 kernel: ACPI: Early table checksum verification disabled
Apr 21 09:57:32.880792 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 21 09:57:32.880798 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 21 09:57:32.880805 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880812 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880817 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880823 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880830 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880837 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880845 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880852 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880858 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:32.880865 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 09:57:32.880871 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 21 09:57:32.880877 kernel: NUMA: Failed to initialise from firmware
Apr 21 09:57:32.880884 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:32.880890 kernel: NUMA: NODE_DATA [mem 0x139671800-0x139676fff]
Apr 21 09:57:32.880897 kernel: Zone ranges:
Apr 21 09:57:32.880903 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 21 09:57:32.880911 kernel: DMA32 empty
Apr 21 09:57:32.880917 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 21 09:57:32.880923 kernel: Movable zone start for each node
Apr 21 09:57:32.880930 kernel: Early memory node ranges
Apr 21 09:57:32.880936 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 21 09:57:32.880943 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 21 09:57:32.880949 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 21 09:57:32.880955 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 21 09:57:32.880962 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 21 09:57:32.880968 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 21 09:57:32.880975 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 21 09:57:32.880981 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:32.880989 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 09:57:32.880996 kernel: psci: probing for conduit method from ACPI.
Apr 21 09:57:32.881002 kernel: psci: PSCIv1.1 detected in firmware.
Apr 21 09:57:32.881011 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 09:57:32.881067 kernel: psci: Trusted OS migration not required
Apr 21 09:57:32.881075 kernel: psci: SMC Calling Convention v1.1
Apr 21 09:57:32.881084 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 21 09:57:32.881091 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 09:57:32.881098 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 09:57:32.881105 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 09:57:32.881111 kernel: Detected PIPT I-cache on CPU0
Apr 21 09:57:32.881118 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 09:57:32.881125 kernel: CPU features: detected: Hardware dirty bit management
Apr 21 09:57:32.881131 kernel: CPU features: detected: Spectre-v4
Apr 21 09:57:32.881138 kernel: CPU features: detected: Spectre-BHB
Apr 21 09:57:32.881145 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 21 09:57:32.881153 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 21 09:57:32.881160 kernel: CPU features: detected: ARM erratum 1418040
Apr 21 09:57:32.881167 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 21 09:57:32.881173 kernel: alternatives: applying boot alternatives
Apr 21 09:57:32.881181 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 09:57:32.881188 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 09:57:32.881195 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 09:57:32.881202 kernel: Fallback order for Node 0: 0
Apr 21 09:57:32.881209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 21 09:57:32.881215 kernel: Policy zone: Normal
Apr 21 09:57:32.881222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 09:57:32.881230 kernel: software IO TLB: area num 2.
Apr 21 09:57:32.881237 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 21 09:57:32.881244 kernel: Memory: 3882824K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213176K reserved, 0K cma-reserved)
Apr 21 09:57:32.881250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 09:57:32.881257 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 09:57:32.881264 kernel: rcu: RCU event tracing is enabled.
Apr 21 09:57:32.881272 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 09:57:32.881278 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 09:57:32.881285 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 09:57:32.881292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 09:57:32.881299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 09:57:32.881305 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 09:57:32.881313 kernel: GICv3: 256 SPIs implemented
Apr 21 09:57:32.881320 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 09:57:32.881327 kernel: Root IRQ handler: gic_handle_irq
Apr 21 09:57:32.881333 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 21 09:57:32.881340 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 21 09:57:32.881347 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 21 09:57:32.881354 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 21 09:57:32.881361 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 21 09:57:32.881372 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 21 09:57:32.881379 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 21 09:57:32.881386 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 09:57:32.881394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:32.881401 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 21 09:57:32.881408 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 21 09:57:32.881415 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 21 09:57:32.881422 kernel: Console: colour dummy device 80x25
Apr 21 09:57:32.881439 kernel: ACPI: Core revision 20230628
Apr 21 09:57:32.881448 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 21 09:57:32.881455 kernel: pid_max: default: 32768 minimum: 301
Apr 21 09:57:32.881462 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 09:57:32.881469 kernel: landlock: Up and running.
Apr 21 09:57:32.881478 kernel: SELinux: Initializing.
Apr 21 09:57:32.881485 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:32.881492 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:32.881499 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:32.881506 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:32.881513 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 09:57:32.881520 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 09:57:32.881527 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 21 09:57:32.881534 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 21 09:57:32.881543 kernel: Remapping and enabling EFI services.
Apr 21 09:57:32.881549 kernel: smp: Bringing up secondary CPUs ...
Apr 21 09:57:32.881556 kernel: Detected PIPT I-cache on CPU1
Apr 21 09:57:32.881563 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 21 09:57:32.881570 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 21 09:57:32.881577 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:32.881584 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 21 09:57:32.881591 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 09:57:32.881598 kernel: SMP: Total of 2 processors activated.
Apr 21 09:57:32.881606 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 09:57:32.881614 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 21 09:57:32.881621 kernel: CPU features: detected: Common not Private translations
Apr 21 09:57:32.881633 kernel: CPU features: detected: CRC32 instructions
Apr 21 09:57:32.881642 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 21 09:57:32.881649 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 21 09:57:32.881657 kernel: CPU features: detected: LSE atomic instructions
Apr 21 09:57:32.881664 kernel: CPU features: detected: Privileged Access Never
Apr 21 09:57:32.881672 kernel: CPU features: detected: RAS Extension Support
Apr 21 09:57:32.881681 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 21 09:57:32.881688 kernel: CPU: All CPU(s) started at EL1
Apr 21 09:57:32.881696 kernel: alternatives: applying system-wide alternatives
Apr 21 09:57:32.881703 kernel: devtmpfs: initialized
Apr 21 09:57:32.881710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 09:57:32.881718 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 09:57:32.881725 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 09:57:32.881732 kernel: SMBIOS 3.0.0 present.
Apr 21 09:57:32.881741 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 21 09:57:32.881748 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 09:57:32.881756 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 09:57:32.881763 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 09:57:32.881771 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 09:57:32.881778 kernel: audit: initializing netlink subsys (disabled)
Apr 21 09:57:32.881786 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Apr 21 09:57:32.881793 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 09:57:32.881801 kernel: cpuidle: using governor menu
Apr 21 09:57:32.881809 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 09:57:32.881817 kernel: ASID allocator initialised with 32768 entries
Apr 21 09:57:32.881824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 09:57:32.881832 kernel: Serial: AMBA PL011 UART driver
Apr 21 09:57:32.881839 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 21 09:57:32.881847 kernel: Modules: 0 pages in range for non-PLT usage
Apr 21 09:57:32.881854 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 09:57:32.881861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 09:57:32.881868 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 09:57:32.881877 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 09:57:32.881885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 09:57:32.881892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 09:57:32.881900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 09:57:32.881907 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 09:57:32.881915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 09:57:32.881922 kernel: ACPI: Added _OSI(Module Device)
Apr 21 09:57:32.881930 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 09:57:32.881937 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 09:57:32.881947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 09:57:32.881954 kernel: ACPI: Interpreter enabled
Apr 21 09:57:32.881962 kernel: ACPI: Using GIC for interrupt routing
Apr 21 09:57:32.881969 kernel: ACPI: MCFG table detected, 1 entries
Apr 21 09:57:32.881976 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 21 09:57:32.881984 kernel: printk: console [ttyAMA0] enabled
Apr 21 09:57:32.881991 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 09:57:32.882161 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 09:57:32.882243 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 21 09:57:32.882309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 21 09:57:32.882372 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 21 09:57:32.882451 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 21 09:57:32.882462 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 21 09:57:32.882470 kernel: PCI host bridge to bus 0000:00
Apr 21 09:57:32.882545 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:32.882610 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 21 09:57:32.882668 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:32.882727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 09:57:32.882807 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 21 09:57:32.882885 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 21 09:57:32.882952 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 21 09:57:32.883070 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:32.883160 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.883231 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 21 09:57:32.883308 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.883375 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 21 09:57:32.883485 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.883567 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 21 09:57:32.883648 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.883715 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 21 09:57:32.883793 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.883860 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 21 09:57:32.883938 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.884005 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 21 09:57:32.884096 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.884164 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 21 09:57:32.884238 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.884306 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 21 09:57:32.884378 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:32.884459 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 21 09:57:32.884541 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 21 09:57:32.884610 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 21 09:57:32.884688 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:32.884758 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 21 09:57:32.884827 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:32.884896 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:32.884970 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 09:57:32.885132 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 21 09:57:32.885215 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 09:57:32.885287 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 21 09:57:32.885357 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 21 09:57:32.885489 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 09:57:32.885580 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 21 09:57:32.885673 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 09:57:32.885755 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 21 09:57:32.885824 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 21 09:57:32.885901 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 09:57:32.885971 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 21 09:57:32.886056 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:32.886139 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:32.886209 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 21 09:57:32.886278 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 21 09:57:32.886348 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:32.886420 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 21 09:57:32.886503 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:32.886571 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:32.886644 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 21 09:57:32.886711 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 21 09:57:32.886778 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 21 09:57:32.886848 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 21 09:57:32.886915 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:32.886981 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:32.887084 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 21 09:57:32.887159 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 21 09:57:32.887227 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 21 09:57:32.887297 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 21 09:57:32.887362 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:32.887426 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:32.887510 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 21 09:57:32.887577 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:32.887642 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:32.887716 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 09:57:32.887783 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:32.887848 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:32.887925 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 09:57:32.887994 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:32.888093 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:32.888164 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 09:57:32.888229 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:32.888297 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:32.888363 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 21 09:57:32.888456 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:32.888540 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 21 09:57:32.888608 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:32.888675 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 21 09:57:32.888745 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:32.888811 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 21 09:57:32.888878 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:32.888944 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 21 09:57:32.889013 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:32.891213 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:32.891284 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:32.891363 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:32.891467 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:32.891544 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:32.891612 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:32.891681 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 21 09:57:32.891747 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:32.891817 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 21 09:57:32.891887 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 21 09:57:32.891957 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 21 09:57:32.892058 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 09:57:32.892252 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 21 09:57:32.892369 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 09:57:32.892464 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 21 09:57:32.892530 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 09:57:32.894130 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 21 09:57:32.895223 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 21 09:57:32.895308 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 21 09:57:32.895378 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 21 09:57:32.895465 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 21 09:57:32.895534 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 21 09:57:32.895604 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 21 09:57:32.895672 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 21 09:57:32.895741 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 21 09:57:32.895816 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 21 09:57:32.895884 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 21 09:57:32.895950 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 21 09:57:32.896072 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 21 09:57:32.896161 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 21 09:57:32.896235 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:32.896302 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 21 09:57:32.896369 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 09:57:32.896454 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 21 09:57:32.896520 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 21 09:57:32.896584 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:32.896656 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 21 09:57:32.896727 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 09:57:32.896792 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 21 09:57:32.896857 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 21 09:57:32.896923 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:32.896996 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:32.897142 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 21 09:57:32.897215 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 09:57:32.897280 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 21 09:57:32.897358 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 21 09:57:32.897423 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:32.897551 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:32.897643 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 09:57:32.897708 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 21 09:57:32.897772 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 21 09:57:32.897840 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:32.897952 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 21 09:57:32.898065 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 21 09:57:32.898148 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 09:57:32.898220 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 21 09:57:32.898291 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 21 09:57:32.898362 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:32.898480 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 21 09:57:32.898573 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 21 09:57:32.898652 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 09:57:32.898730 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 21 09:57:32.898802 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:32.898876 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:32.898960 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 21 09:57:32.900161 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 21 09:57:32.900260 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 21 09:57:32.900334 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 09:57:32.900404 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 21 09:57:32.900507 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:32.900584 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:32.900655 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 09:57:32.900723 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 21 09:57:32.900788 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:32.900856 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:32.900925 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 09:57:32.900990 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 21 09:57:32.901078 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 21 09:57:32.901146 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:32.901215 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:32.901276 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 21 09:57:32.901335 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:32.901409 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 21 09:57:32.901482 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 21 09:57:32.901551 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:32.901621 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 21 09:57:32.901682 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 21 09:57:32.901741 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:32.901808 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 21 09:57:32.901868 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 21 09:57:32.901931 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:32.901999 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 21 09:57:32.902071 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 21 09:57:32.902146 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:32.902215 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 21 09:57:32.902276 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 21 09:57:32.902336 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:32.902407 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 21 09:57:32.902536 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:32.902615 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:32.902703 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 21 09:57:32.902771 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:32.902832 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:32.902900 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 21 09:57:32.902962 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:32.903036 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:32.903107 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 21 09:57:32.903172 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 21 09:57:32.903236 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:32.903246 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 21 09:57:32.903254 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 21 09:57:32.903262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 21 09:57:32.903270 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 21 09:57:32.903278 kernel: iommu: Default domain type: Translated
Apr 21 09:57:32.903287 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 09:57:32.903295 kernel: efivars: Registered efivars operations
Apr 21 09:57:32.903302 kernel: vgaarb: loaded
Apr 21 09:57:32.903312 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 09:57:32.903320 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 09:57:32.903328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 09:57:32.903336 kernel: pnp: PnP ACPI init
Apr 21 09:57:32.903412 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 21 09:57:32.903423 kernel: pnp: PnP ACPI: found 1 devices
Apr 21 09:57:32.903443 kernel: NET: Registered PF_INET protocol family
Apr 21 09:57:32.903452 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 09:57:32.903464 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 09:57:32.903472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 09:57:32.903480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 09:57:32.903488
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 09:57:32.903496 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 09:57:32.903504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:32.903512 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:32.903520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 09:57:32.903601 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 21 09:57:32.903616 kernel: PCI: CLS 0 bytes, default 64 Apr 21 09:57:32.903624 kernel: kvm [1]: HYP mode not available Apr 21 09:57:32.903631 kernel: Initialise system trusted keyrings Apr 21 09:57:32.903639 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 09:57:32.903647 kernel: Key type asymmetric registered Apr 21 09:57:32.903655 kernel: Asymmetric key parser 'x509' registered Apr 21 09:57:32.903663 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 21 09:57:32.903671 kernel: io scheduler mq-deadline registered Apr 21 09:57:32.903679 kernel: io scheduler kyber registered Apr 21 09:57:32.903688 kernel: io scheduler bfq registered Apr 21 09:57:32.903697 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 21 09:57:32.903770 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 21 09:57:32.903839 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 21 09:57:32.903907 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.903976 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 21 09:57:32.904069 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 21 09:57:32.904144 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.904216 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 21 09:57:32.904283 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 21 09:57:32.904349 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.904419 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 21 09:57:32.904540 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 21 09:57:32.904617 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.904690 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 21 09:57:32.904769 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 21 09:57:32.904837 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.904911 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 21 09:57:32.904979 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 21 09:57:32.905066 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.905138 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 21 09:57:32.905230 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 21 09:57:32.905310 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.905380 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 21 09:57:32.905465 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 21 09:57:32.905542 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.905553 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 21 09:57:32.905623 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 21 09:57:32.905690 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 21 09:57:32.905758 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:32.905768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 21 09:57:32.905780 kernel: ACPI: button: Power Button [PWRB] Apr 21 09:57:32.905789 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 21 09:57:32.905862 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 21 09:57:32.905936 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 21 09:57:32.905954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 09:57:32.905962 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 21 09:57:32.906088 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 21 09:57:32.906103 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 21 09:57:32.906111 kernel: thunder_xcv, ver 1.0 Apr 21 09:57:32.906122 kernel: thunder_bgx, ver 1.0 Apr 21 09:57:32.906130 kernel: nicpf, ver 1.0 Apr 21 09:57:32.906138 kernel: nicvf, ver 1.0 Apr 21 09:57:32.906223 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 21 09:57:32.906287 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T09:57:32 UTC (1776765452) Apr 21 09:57:32.906298 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 09:57:32.906306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 21 09:57:32.906314 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 21 09:57:32.906324 kernel: watchdog: Hard watchdog permanently disabled Apr 21 09:57:32.906332 kernel: NET: Registered PF_INET6 protocol family Apr 21 09:57:32.906340 kernel: Segment Routing with IPv6 Apr 21 09:57:32.906348 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 21 09:57:32.906355 kernel: NET: Registered PF_PACKET protocol family Apr 21 09:57:32.906363 kernel: Key type dns_resolver registered Apr 21 09:57:32.906371 kernel: registered taskstats version 1 Apr 21 09:57:32.906378 kernel: Loading compiled-in X.509 certificates Apr 21 09:57:32.906387 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4' Apr 21 09:57:32.906396 kernel: Key type .fscrypt registered Apr 21 09:57:32.906404 kernel: Key type fscrypt-provisioning registered Apr 21 09:57:32.906412 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 09:57:32.906419 kernel: ima: Allocated hash algorithm: sha1 Apr 21 09:57:32.906436 kernel: ima: No architecture policies found Apr 21 09:57:32.906445 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 21 09:57:32.906453 kernel: clk: Disabling unused clocks Apr 21 09:57:32.906460 kernel: Freeing unused kernel memory: 39424K Apr 21 09:57:32.906468 kernel: Run /init as init process Apr 21 09:57:32.906478 kernel: with arguments: Apr 21 09:57:32.906486 kernel: /init Apr 21 09:57:32.906494 kernel: with environment: Apr 21 09:57:32.906501 kernel: HOME=/ Apr 21 09:57:32.906509 kernel: TERM=linux Apr 21 09:57:32.906519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 09:57:32.906529 systemd[1]: Detected virtualization kvm. Apr 21 09:57:32.906538 systemd[1]: Detected architecture arm64. Apr 21 09:57:32.906547 systemd[1]: Running in initrd. Apr 21 09:57:32.906555 systemd[1]: No hostname configured, using default hostname. Apr 21 09:57:32.906563 systemd[1]: Hostname set to . 
Apr 21 09:57:32.906572 systemd[1]: Initializing machine ID from VM UUID. Apr 21 09:57:32.906580 systemd[1]: Queued start job for default target initrd.target. Apr 21 09:57:32.906588 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 09:57:32.906597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 09:57:32.906606 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 09:57:32.906616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 09:57:32.906624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 09:57:32.906633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 09:57:32.906643 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 09:57:32.906652 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 09:57:32.906660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 09:57:32.906669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 09:57:32.906679 systemd[1]: Reached target paths.target - Path Units. Apr 21 09:57:32.906689 systemd[1]: Reached target slices.target - Slice Units. Apr 21 09:57:32.906698 systemd[1]: Reached target swap.target - Swaps. Apr 21 09:57:32.906706 systemd[1]: Reached target timers.target - Timer Units. Apr 21 09:57:32.906714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 09:57:32.906722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 09:57:32.906731 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 21 09:57:32.906739 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 09:57:32.906749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 09:57:32.906758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 09:57:32.906766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 09:57:32.906774 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 09:57:32.906782 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 09:57:32.906791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 09:57:32.906799 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 09:57:32.908186 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 09:57:32.908217 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 09:57:32.908234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 09:57:32.908243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 09:57:32.908252 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 09:57:32.908260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 09:57:32.908302 systemd-journald[238]: Collecting audit messages is disabled. Apr 21 09:57:32.908325 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 09:57:32.908335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 09:57:32.908344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 09:57:32.908354 kernel: Bridge firewalling registered Apr 21 09:57:32.908362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 21 09:57:32.908371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:32.908381 systemd-journald[238]: Journal started Apr 21 09:57:32.908400 systemd-journald[238]: Runtime Journal (/run/log/journal/57e067dfba7e4af6bf0213979f29197a) is 8.0M, max 76.6M, 68.6M free. Apr 21 09:57:32.872245 systemd-modules-load[239]: Inserted module 'overlay' Apr 21 09:57:32.910558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:32.898899 systemd-modules-load[239]: Inserted module 'br_netfilter' Apr 21 09:57:32.914247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 09:57:32.915038 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 09:57:32.916033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 09:57:32.926357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 09:57:32.929750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 09:57:32.942312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 09:57:32.943120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 09:57:32.952167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 09:57:32.953936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 09:57:32.960271 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 09:57:32.966239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 09:57:32.973289 dracut-cmdline[274]: dracut-dracut-053 Apr 21 09:57:32.976746 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407 Apr 21 09:57:33.007396 systemd-resolved[278]: Positive Trust Anchors: Apr 21 09:57:33.007414 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 09:57:33.007484 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 09:57:33.013406 systemd-resolved[278]: Defaulting to hostname 'linux'. Apr 21 09:57:33.014948 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 09:57:33.016966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 09:57:33.087098 kernel: SCSI subsystem initialized Apr 21 09:57:33.090125 kernel: Loading iSCSI transport class v2.0-870. Apr 21 09:57:33.098065 kernel: iscsi: registered transport (tcp) Apr 21 09:57:33.112077 kernel: iscsi: registered transport (qla4xxx) Apr 21 09:57:33.112167 kernel: QLogic iSCSI HBA Driver Apr 21 09:57:33.159684 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 09:57:33.164195 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 09:57:33.186328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 09:57:33.186402 kernel: device-mapper: uevent: version 1.0.3 Apr 21 09:57:33.186413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 09:57:33.242092 kernel: raid6: neonx8 gen() 14872 MB/s Apr 21 09:57:33.259101 kernel: raid6: neonx4 gen() 13792 MB/s Apr 21 09:57:33.276089 kernel: raid6: neonx2 gen() 13167 MB/s Apr 21 09:57:33.293079 kernel: raid6: neonx1 gen() 10439 MB/s Apr 21 09:57:33.310094 kernel: raid6: int64x8 gen() 6930 MB/s Apr 21 09:57:33.327119 kernel: raid6: int64x4 gen() 7324 MB/s Apr 21 09:57:33.344095 kernel: raid6: int64x2 gen() 6051 MB/s Apr 21 09:57:33.361097 kernel: raid6: int64x1 gen() 5039 MB/s Apr 21 09:57:33.361184 kernel: raid6: using algorithm neonx8 gen() 14872 MB/s Apr 21 09:57:33.378090 kernel: raid6: .... xor() 11939 MB/s, rmw enabled Apr 21 09:57:33.378173 kernel: raid6: using neon recovery algorithm Apr 21 09:57:33.383324 kernel: xor: measuring software checksum speed Apr 21 09:57:33.383391 kernel: 8regs : 17639 MB/sec Apr 21 09:57:33.383412 kernel: 32regs : 19669 MB/sec Apr 21 09:57:33.383451 kernel: arm64_neon : 27016 MB/sec Apr 21 09:57:33.384071 kernel: xor: using function: arm64_neon (27016 MB/sec) Apr 21 09:57:33.435060 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 09:57:33.450407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 09:57:33.457295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 09:57:33.472944 systemd-udevd[460]: Using default interface naming scheme 'v255'. Apr 21 09:57:33.477538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 21 09:57:33.486297 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 09:57:33.503986 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Apr 21 09:57:33.545092 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 09:57:33.551339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 09:57:33.603840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 09:57:33.616696 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 09:57:33.634321 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 09:57:33.637232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 09:57:33.637966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 09:57:33.640801 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 09:57:33.649385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 09:57:33.665069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 21 09:57:33.708339 kernel: ACPI: bus type USB registered Apr 21 09:57:33.708399 kernel: usbcore: registered new interface driver usbfs Apr 21 09:57:33.709266 kernel: usbcore: registered new interface driver hub Apr 21 09:57:33.709297 kernel: usbcore: registered new device driver usb Apr 21 09:57:33.727574 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 09:57:33.727816 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 09:57:33.727907 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 09:57:33.730055 kernel: scsi host0: Virtio SCSI HBA Apr 21 09:57:33.731259 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 09:57:33.733149 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 09:57:33.733332 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 09:57:33.739109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 09:57:33.739247 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 09:57:33.745712 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:33.749400 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 09:57:33.749469 kernel: hub 1-0:1.0: USB hub found Apr 21 09:57:33.749684 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 09:57:33.746355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 09:57:33.751416 kernel: hub 1-0:1.0: 4 ports detected Apr 21 09:57:33.746527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:33.754345 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 21 09:57:33.750061 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 09:57:33.756299 kernel: hub 2-0:1.0: USB hub found Apr 21 09:57:33.757050 kernel: hub 2-0:1.0: 4 ports detected Apr 21 09:57:33.759066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 09:57:33.776059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:33.782229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:33.794522 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 21 09:57:33.796210 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 21 09:57:33.796471 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 09:57:33.799051 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 21 09:57:33.811411 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 21 09:57:33.814339 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 21 09:57:33.814590 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 21 09:57:33.814693 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 21 09:57:33.814780 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 09:57:33.819533 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 09:57:33.819597 kernel: GPT:17805311 != 80003071 Apr 21 09:57:33.819609 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 09:57:33.821139 kernel: GPT:17805311 != 80003071 Apr 21 09:57:33.821182 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 09:57:33.821193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:33.822032 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 21 09:57:33.824455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 09:57:33.870062 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (506) Apr 21 09:57:33.870118 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (522) Apr 21 09:57:33.872375 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 09:57:33.890355 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 09:57:33.896931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 09:57:33.897842 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 09:57:33.905596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 09:57:33.912361 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 09:57:33.921061 disk-uuid[575]: Primary Header is updated. Apr 21 09:57:33.921061 disk-uuid[575]: Secondary Entries is updated. Apr 21 09:57:33.921061 disk-uuid[575]: Secondary Header is updated. 
Apr 21 09:57:33.932051 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:33.937151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:33.944236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:33.994285 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 09:57:34.138042 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 21 09:57:34.138105 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 09:57:34.138286 kernel: usbcore: registered new interface driver usbhid Apr 21 09:57:34.138297 kernel: usbhid: USB HID core driver Apr 21 09:57:34.239060 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 21 09:57:34.368098 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 21 09:57:34.422162 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 21 09:57:34.944117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:34.945603 disk-uuid[577]: The operation has completed successfully. Apr 21 09:57:35.002661 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 09:57:35.003580 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 09:57:35.014247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 09:57:35.032137 sh[594]: Success Apr 21 09:57:35.047075 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 21 09:57:35.102710 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 09:57:35.119177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 21 09:57:35.120864 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 09:57:35.137637 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 Apr 21 09:57:35.137712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 21 09:57:35.137730 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 09:57:35.137746 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 09:57:35.138431 kernel: BTRFS info (device dm-0): using free space tree Apr 21 09:57:35.145061 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 09:57:35.147755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 09:57:35.149902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 09:57:35.161393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 09:57:35.166506 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 09:57:35.179221 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 09:57:35.179282 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 21 09:57:35.179296 kernel: BTRFS info (device sda6): using free space tree Apr 21 09:57:35.186343 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 09:57:35.186414 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 09:57:35.199163 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 09:57:35.199172 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 09:57:35.207129 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 21 09:57:35.216289 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 09:57:35.319870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 09:57:35.324584 ignition[678]: Ignition 2.19.0 Apr 21 09:57:35.324759 ignition[678]: Stage: fetch-offline Apr 21 09:57:35.327407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 09:57:35.324804 ignition[678]: no configs at "/usr/lib/ignition/base.d" Apr 21 09:57:35.328473 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 09:57:35.324817 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 09:57:35.325009 ignition[678]: parsed url from cmdline: "" Apr 21 09:57:35.325013 ignition[678]: no config URL provided Apr 21 09:57:35.325044 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 09:57:35.325054 ignition[678]: no config at "/usr/lib/ignition/user.ign" Apr 21 09:57:35.325059 ignition[678]: failed to fetch config: resource requires networking Apr 21 09:57:35.325445 ignition[678]: Ignition finished successfully Apr 21 09:57:35.352603 systemd-networkd[787]: lo: Link UP Apr 21 09:57:35.352615 systemd-networkd[787]: lo: Gained carrier Apr 21 09:57:35.354229 systemd-networkd[787]: Enumeration completed Apr 21 09:57:35.354781 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 09:57:35.354784 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 09:57:35.355228 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 09:57:35.355738 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 09:57:35.355742 systemd-networkd[787]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:35.356410 systemd-networkd[787]: eth0: Link UP
Apr 21 09:57:35.356414 systemd-networkd[787]: eth0: Gained carrier
Apr 21 09:57:35.356440 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:35.357675 systemd[1]: Reached target network.target - Network.
Apr 21 09:57:35.361983 systemd-networkd[787]: eth1: Link UP
Apr 21 09:57:35.361987 systemd-networkd[787]: eth1: Gained carrier
Apr 21 09:57:35.361998 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:35.364248 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 09:57:35.380995 ignition[790]: Ignition 2.19.0
Apr 21 09:57:35.381030 ignition[790]: Stage: fetch
Apr 21 09:57:35.381252 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:35.381262 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:35.381360 ignition[790]: parsed url from cmdline: ""
Apr 21 09:57:35.381363 ignition[790]: no config URL provided
Apr 21 09:57:35.381368 ignition[790]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 09:57:35.381375 ignition[790]: no config at "/usr/lib/ignition/user.ign"
Apr 21 09:57:35.381398 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 09:57:35.382092 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 09:57:35.402134 systemd-networkd[787]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:35.421138 systemd-networkd[787]: eth0: DHCPv4 address 178.104.211.77/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:35.582291 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 09:57:35.589271 ignition[790]: GET result: OK
Apr 21 09:57:35.589410 ignition[790]: parsing config with SHA512: af68792f945f5494663e96199eac6170fbcb5b1c4d8b29587c9ea833d8caf81ed215266fa674d52b23fc03f7159a838b3b7209f8e9a21d157ba9e76025d76ed0
Apr 21 09:57:35.598232 unknown[790]: fetched base config from "system"
Apr 21 09:57:35.598251 unknown[790]: fetched base config from "system"
Apr 21 09:57:35.598714 ignition[790]: fetch: fetch complete
Apr 21 09:57:35.598256 unknown[790]: fetched user config from "hetzner"
Apr 21 09:57:35.598722 ignition[790]: fetch: fetch passed
Apr 21 09:57:35.598773 ignition[790]: Ignition finished successfully
Apr 21 09:57:35.604104 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:35.611487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 09:57:35.626486 ignition[798]: Ignition 2.19.0
Apr 21 09:57:35.626497 ignition[798]: Stage: kargs
Apr 21 09:57:35.626677 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:35.626687 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:35.627831 ignition[798]: kargs: kargs passed
Apr 21 09:57:35.627890 ignition[798]: Ignition finished successfully
Apr 21 09:57:35.630744 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:35.639334 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 09:57:35.653156 ignition[805]: Ignition 2.19.0
Apr 21 09:57:35.654248 ignition[805]: Stage: disks
Apr 21 09:57:35.654578 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:35.654593 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:35.655648 ignition[805]: disks: disks passed
Apr 21 09:57:35.655704 ignition[805]: Ignition finished successfully
Apr 21 09:57:35.657797 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 09:57:35.659478 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:35.660788 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 09:57:35.661559 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:35.662115 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:35.662679 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:35.674594 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 09:57:35.695715 systemd-fsck[813]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 09:57:35.702125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 09:57:35.708221 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 09:57:35.768122 kernel: EXT4-fs (sda9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none.
Apr 21 09:57:35.768410 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 09:57:35.769708 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 09:57:35.778189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:35.783249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 09:57:35.788263 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 09:57:35.789012 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 09:57:35.789064 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:35.800920 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (821)
Apr 21 09:57:35.800945 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:35.800956 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:35.800967 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:35.804781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 09:57:35.808163 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:35.808209 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:35.812002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:35.824309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 09:57:35.867326 coreos-metadata[823]: Apr 21 09:57:35.867 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 09:57:35.869319 coreos-metadata[823]: Apr 21 09:57:35.869 INFO Fetch successful
Apr 21 09:57:35.870783 coreos-metadata[823]: Apr 21 09:57:35.870 INFO wrote hostname ci-4081-3-7-7-fa740892b3 to /sysroot/etc/hostname
Apr 21 09:57:35.875814 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:35.892849 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 09:57:35.899893 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory
Apr 21 09:57:35.906110 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 09:57:35.911083 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 09:57:36.015231 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:36.021209 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 09:57:36.027250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:36.035127 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:36.059062 ignition[937]: INFO : Ignition 2.19.0
Apr 21 09:57:36.061129 ignition[937]: INFO : Stage: mount
Apr 21 09:57:36.061129 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:36.061129 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:36.064104 ignition[937]: INFO : mount: mount passed
Apr 21 09:57:36.064104 ignition[937]: INFO : Ignition finished successfully
Apr 21 09:57:36.065749 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 09:57:36.066734 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:36.075242 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 09:57:36.138415 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 09:57:36.147229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:36.161086 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951)
Apr 21 09:57:36.163318 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:36.163395 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:36.163436 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:36.167061 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:36.167187 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:36.171051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:36.192293 ignition[968]: INFO : Ignition 2.19.0
Apr 21 09:57:36.192293 ignition[968]: INFO : Stage: files
Apr 21 09:57:36.193520 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:36.193520 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:36.195261 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 09:57:36.196213 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 09:57:36.196213 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 09:57:36.201682 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 09:57:36.203053 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 09:57:36.204521 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 09:57:36.204408 unknown[968]: wrote ssh authorized keys file for user: core
Apr 21 09:57:36.207677 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 09:57:36.207677 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 09:57:36.207677 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:36.207677 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 21 09:57:36.312135 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 09:57:36.562912 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:36.562912 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 09:57:36.565775 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 21 09:57:36.643254 systemd-networkd[787]: eth1: Gained IPv6LL
Apr 21 09:57:36.707302 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 21 09:57:36.958559 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 21 09:57:37.326975 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 09:57:37.326975 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:37.326975 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:37.326975 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:37.331696 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 21 09:57:37.708860 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 21 09:57:39.396099 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 21 09:57:39.396099 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:39.401538 ignition[968]: INFO : files: files passed
Apr 21 09:57:39.401538 ignition[968]: INFO : Ignition finished successfully
Apr 21 09:57:39.403143 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 09:57:39.412238 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 09:57:39.418457 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 09:57:39.423369 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 09:57:39.423863 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 09:57:39.437129 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:39.437129 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:39.441681 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:39.442753 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:39.443846 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 09:57:39.449266 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 09:57:39.480682 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 09:57:39.480914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 09:57:39.482542 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 09:57:39.483857 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 09:57:39.485226 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 09:57:39.491296 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 09:57:39.504656 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:39.511227 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 09:57:39.525650 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:39.527452 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:39.528820 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 09:57:39.529948 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 09:57:39.530730 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:39.532529 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 09:57:39.533162 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 09:57:39.534595 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 09:57:39.535927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:39.537158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:39.538294 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 09:57:39.539344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 09:57:39.540449 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 09:57:39.541582 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 09:57:39.542560 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 09:57:39.543434 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 09:57:39.543613 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 09:57:39.544827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:39.545949 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:39.547008 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 09:57:39.547571 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:39.548373 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 09:57:39.548619 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 09:57:39.550111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 09:57:39.550285 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:39.551305 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 09:57:39.551463 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 09:57:39.552292 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 09:57:39.552474 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:39.558343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 09:57:39.558916 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 09:57:39.559110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:39.564444 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:39.565808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 09:57:39.566011 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:39.568809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 09:57:39.568964 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 09:57:39.578314 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 09:57:39.578466 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 09:57:39.586383 ignition[1020]: INFO : Ignition 2.19.0
Apr 21 09:57:39.586383 ignition[1020]: INFO : Stage: umount
Apr 21 09:57:39.589578 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:39.589578 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:39.589578 ignition[1020]: INFO : umount: umount passed
Apr 21 09:57:39.589578 ignition[1020]: INFO : Ignition finished successfully
Apr 21 09:57:39.592650 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 09:57:39.592862 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 09:57:39.597316 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 09:57:39.597758 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 09:57:39.597798 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 09:57:39.598496 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 09:57:39.598533 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:39.599821 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 09:57:39.599865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:39.600529 systemd[1]: Stopped target network.target - Network.
Apr 21 09:57:39.600982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 09:57:39.601044 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 09:57:39.601686 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 09:57:39.602601 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 09:57:39.606408 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:39.607853 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 09:57:39.609573 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 09:57:39.611156 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 09:57:39.611228 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 09:57:39.612821 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 09:57:39.612857 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 09:57:39.614007 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 09:57:39.614063 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 09:57:39.614971 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 09:57:39.615009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 09:57:39.616533 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 09:57:39.617289 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:39.618200 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 09:57:39.618291 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:39.619395 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 09:57:39.619502 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:39.623701 systemd-networkd[787]: eth1: DHCPv6 lease lost
Apr 21 09:57:39.627892 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 09:57:39.628039 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:39.629459 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 21 09:57:39.632316 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 09:57:39.633129 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 09:57:39.634689 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 09:57:39.634761 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:39.649243 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 09:57:39.649934 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 09:57:39.650030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 09:57:39.652783 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 09:57:39.652850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:39.653670 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 09:57:39.653715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:39.654724 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 09:57:39.654763 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:39.656844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:39.674601 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 09:57:39.674750 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 09:57:39.678959 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 09:57:39.679165 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:39.682441 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 09:57:39.682497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:39.684164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 09:57:39.684202 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:39.684833 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 09:57:39.684880 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 09:57:39.686625 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 09:57:39.686684 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 09:57:39.688268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 09:57:39.688324 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:39.700507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 09:57:39.701702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 09:57:39.701806 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:39.703173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 09:57:39.703238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:39.711869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 09:57:39.711984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 09:57:39.715089 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 09:57:39.721356 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 09:57:39.731374 systemd[1]: Switching root.
Apr 21 09:57:39.772117 systemd-journald[238]: Journal stopped
Apr 21 09:57:40.795367 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 21 09:57:40.796592 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 09:57:40.796612 kernel: SELinux: policy capability open_perms=1
Apr 21 09:57:40.796627 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 09:57:40.796637 kernel: SELinux: policy capability always_check_network=0
Apr 21 09:57:40.796646 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 09:57:40.796659 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 09:57:40.796673 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 09:57:40.796683 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 09:57:40.796693 systemd[1]: Successfully loaded SELinux policy in 35.209ms.
Apr 21 09:57:40.796719 kernel: audit: type=1403 audit(1776765460.013:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 09:57:40.796731 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.259ms.
Apr 21 09:57:40.796742 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 09:57:40.796753 systemd[1]: Detected virtualization kvm.
Apr 21 09:57:40.796763 systemd[1]: Detected architecture arm64.
Apr 21 09:57:40.796775 systemd[1]: Detected first boot.
Apr 21 09:57:40.796786 systemd[1]: Hostname set to .
Apr 21 09:57:40.796795 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 09:57:40.796806 zram_generator::config[1079]: No configuration found.
Apr 21 09:57:40.797062 systemd[1]: Populated /etc with preset unit settings.
Apr 21 09:57:40.797081 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 09:57:40.797092 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 09:57:40.797104 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 09:57:40.797153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 09:57:40.797187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 09:57:40.797218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 09:57:40.797229 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 09:57:40.797243 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 09:57:40.797253 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 09:57:40.797264 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 09:57:40.797274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:40.797286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:40.797299 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 09:57:40.797310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 09:57:40.797320 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 09:57:40.797331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 09:57:40.797341 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 21 09:57:40.797355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:40.797366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 09:57:40.797376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:40.797389 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 09:57:40.797414 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 09:57:40.797428 systemd[1]: Reached target swap.target - Swaps.
Apr 21 09:57:40.797439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 09:57:40.797450 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 09:57:40.797460 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 09:57:40.797470 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 09:57:40.797484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:40.797497 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:40.797508 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:40.797519 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 09:57:40.797530 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 09:57:40.797541 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 09:57:40.797552 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 09:57:40.797563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 09:57:40.797578 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 09:57:40.797590 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 09:57:40.797601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 09:57:40.797611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:40.797622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 09:57:40.797633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 09:57:40.797654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:40.797669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:40.797679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:40.797690 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 09:57:40.797701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:40.797712 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 09:57:40.797723 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 09:57:40.797734 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 09:57:40.797751 kernel: fuse: init (API version 7.39)
Apr 21 09:57:40.797765 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 09:57:40.797777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 09:57:40.797788 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 09:57:40.797799 kernel: loop: module loaded
Apr 21 09:57:40.797810 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 09:57:40.797857 systemd-journald[1165]: Collecting audit messages is disabled.
Apr 21 09:57:40.797884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 09:57:40.797897 kernel: ACPI: bus type drm_connector registered
Apr 21 09:57:40.797908 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 09:57:40.797918 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 09:57:40.797931 systemd-journald[1165]: Journal started
Apr 21 09:57:40.797953 systemd-journald[1165]: Runtime Journal (/run/log/journal/57e067dfba7e4af6bf0213979f29197a) is 8.0M, max 76.6M, 68.6M free.
Apr 21 09:57:40.802054 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 09:57:40.806821 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 09:57:40.809270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 09:57:40.810557 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 09:57:40.814502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 09:57:40.815516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:40.818249 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 09:57:40.818443 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 09:57:40.819530 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 09:57:40.820730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:40.820881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:40.822043 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:40.822193 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:40.825381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:40.825558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:40.826820 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 09:57:40.826966 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 09:57:40.827994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:40.829793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:40.830895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:40.833388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 09:57:40.835490 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 09:57:40.848628 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 09:57:40.861249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 09:57:40.868165 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 09:57:40.870902 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 09:57:40.886523 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 09:57:40.906894 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 09:57:40.907748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:40.911270 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 09:57:40.912011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:40.918257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:57:40.926176 systemd-journald[1165]: Time spent on flushing to /var/log/journal/57e067dfba7e4af6bf0213979f29197a is 52.267ms for 1113 entries.
Apr 21 09:57:40.926176 systemd-journald[1165]: System Journal (/var/log/journal/57e067dfba7e4af6bf0213979f29197a) is 8.0M, max 584.8M, 576.8M free.
Apr 21 09:57:40.989130 systemd-journald[1165]: Received client request to flush runtime journal.
Apr 21 09:57:40.930244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 09:57:40.935108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:40.936331 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 09:57:40.937460 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 09:57:40.950490 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 09:57:40.951588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 09:57:40.953926 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 09:57:40.979593 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 21 09:57:40.979605 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 21 09:57:40.990885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:40.993717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 09:57:40.998199 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 09:57:41.006450 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 09:57:41.008623 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 09:57:41.041857 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 09:57:41.047259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 09:57:41.063759 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 21 09:57:41.063799 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 21 09:57:41.070678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:41.474156 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 09:57:41.482478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:41.506774 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
Apr 21 09:57:41.532128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:41.542264 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 09:57:41.568350 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 09:57:41.602930 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 21 09:57:41.631926 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 09:57:41.726945 systemd-networkd[1247]: lo: Link UP
Apr 21 09:57:41.727761 systemd-networkd[1247]: lo: Gained carrier
Apr 21 09:57:41.730141 systemd-networkd[1247]: Enumeration completed
Apr 21 09:57:41.730373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 09:57:41.731339 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.731452 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:41.733321 systemd-networkd[1247]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.733328 systemd-networkd[1247]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:41.733915 systemd-networkd[1247]: eth0: Link UP
Apr 21 09:57:41.733918 systemd-networkd[1247]: eth0: Gained carrier
Apr 21 09:57:41.733933 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.737349 systemd-networkd[1247]: eth1: Link UP
Apr 21 09:57:41.737474 systemd-networkd[1247]: eth1: Gained carrier
Apr 21 09:57:41.737536 systemd-networkd[1247]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.740117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 09:57:41.753857 systemd-networkd[1247]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.770036 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 09:57:41.793359 systemd-networkd[1247]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:41.799854 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 21 09:57:41.799881 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Apr 21 09:57:41.800048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:41.818290 systemd-networkd[1247]: eth0: DHCPv4 address 178.104.211.77/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:41.820513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:41.823272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:41.834931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:41.838206 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 21 09:57:41.838273 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 21 09:57:41.838286 kernel: [drm] features: -context_init
Apr 21 09:57:41.838629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 09:57:41.838720 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 09:57:41.839206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:41.839455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:41.841690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:41.841868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:41.843105 kernel: [drm] number of scanouts: 1
Apr 21 09:57:41.843184 kernel: [drm] number of cap sets: 0
Apr 21 09:57:41.849166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:41.854760 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:41.863363 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:41.863582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:41.864552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:41.868042 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 21 09:57:41.876514 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 09:57:41.884057 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 21 09:57:41.888060 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1254)
Apr 21 09:57:41.933132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 09:57:41.940534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:42.006738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:42.028692 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 09:57:42.035287 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 09:57:42.061543 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:42.093814 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 09:57:42.095625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:42.104325 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 09:57:42.109046 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:42.138520 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 09:57:42.139896 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 09:57:42.140946 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 09:57:42.140974 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:42.142123 systemd[1]: Reached target machines.target - Containers.
Apr 21 09:57:42.144313 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 09:57:42.158770 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 09:57:42.162197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 09:57:42.164918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:42.176379 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 09:57:42.182127 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 09:57:42.194733 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 09:57:42.196449 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 09:57:42.210421 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 09:57:42.216093 kernel: loop0: detected capacity change from 0 to 209336
Apr 21 09:57:42.228534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 09:57:42.233680 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 09:57:42.248515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 09:57:42.267049 kernel: loop1: detected capacity change from 0 to 114432
Apr 21 09:57:42.306068 kernel: loop2: detected capacity change from 0 to 114328
Apr 21 09:57:42.351061 kernel: loop3: detected capacity change from 0 to 8
Apr 21 09:57:42.386159 kernel: loop4: detected capacity change from 0 to 209336
Apr 21 09:57:42.404169 kernel: loop5: detected capacity change from 0 to 114432
Apr 21 09:57:42.418110 kernel: loop6: detected capacity change from 0 to 114328
Apr 21 09:57:42.439057 kernel: loop7: detected capacity change from 0 to 8
Apr 21 09:57:42.439871 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 21 09:57:42.440602 (sd-merge)[1336]: Merged extensions into '/usr'.
Apr 21 09:57:42.445990 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 09:57:42.446006 systemd[1]: Reloading...
Apr 21 09:57:42.531044 zram_generator::config[1364]: No configuration found.
Apr 21 09:57:42.605286 ldconfig[1320]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 09:57:42.657156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:42.715914 systemd[1]: Reloading finished in 269 ms.
Apr 21 09:57:42.730602 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 09:57:42.731726 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 09:57:42.739208 systemd[1]: Starting ensure-sysext.service...
Apr 21 09:57:42.742235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 09:57:42.758216 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)...
Apr 21 09:57:42.758252 systemd[1]: Reloading...
Apr 21 09:57:42.785563 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 09:57:42.786241 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 09:57:42.789300 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 09:57:42.789768 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 21 09:57:42.789892 systemd-tmpfiles[1409]: ACLs are not supported, ignoring.
Apr 21 09:57:42.792907 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:42.793140 systemd-tmpfiles[1409]: Skipping /boot
Apr 21 09:57:42.804721 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:42.804852 systemd-tmpfiles[1409]: Skipping /boot
Apr 21 09:57:42.852074 zram_generator::config[1438]: No configuration found.
Apr 21 09:57:42.965610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:43.026233 systemd[1]: Reloading finished in 267 ms.
Apr 21 09:57:43.041281 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:43.060224 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 09:57:43.070326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 09:57:43.077224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 09:57:43.085723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:43.090203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 09:57:43.103746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:43.107742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:43.115136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:43.123031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:43.125317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:43.135368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:43.136515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:43.139617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:43.149626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:43.151210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:43.151995 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 09:57:43.156341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:43.156772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:43.160863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:43.161045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:43.178475 systemd[1]: Finished ensure-sysext.service.
Apr 21 09:57:43.183935 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 09:57:43.186910 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:43.187100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:43.188639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:43.191302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:43.195759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:43.195857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:43.212517 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 09:57:43.220189 augenrules[1525]: No rules
Apr 21 09:57:43.225305 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 09:57:43.227466 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 09:57:43.229529 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 09:57:43.234907 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 09:57:43.235570 systemd-networkd[1247]: eth1: Gained IPv6LL
Apr 21 09:57:43.244626 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 09:57:43.247655 systemd-resolved[1487]: Positive Trust Anchors:
Apr 21 09:57:43.247689 systemd-resolved[1487]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 09:57:43.247725 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 09:57:43.257229 systemd-resolved[1487]: Using system hostname 'ci-4081-3-7-7-fa740892b3'.
Apr 21 09:57:43.257658 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 09:57:43.260849 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:43.261993 systemd[1]: Reached target network.target - Network.
Apr 21 09:57:43.262558 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 09:57:43.263412 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:43.296599 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 09:57:43.298870 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:43.301680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 09:57:43.302532 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 09:57:43.303279 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 09:57:43.304080 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 09:57:43.304114 systemd[1]: Reached target paths.target - Path Units.
Apr 21 09:57:43.304609 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 09:57:43.305354 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 09:57:43.306086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 09:57:43.306757 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 09:57:43.308624 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 09:57:43.311050 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 09:57:43.313525 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 09:57:43.316546 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 09:57:43.317296 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 09:57:43.317946 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:43.318999 systemd[1]: System is tainted: cgroupsv1
Apr 21 09:57:43.319068 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:43.319095 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:43.322176 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 09:57:43.330350 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 09:57:43.339268 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 09:57:43.342219 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 09:57:43.345667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 09:57:43.347758 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 09:57:43.360267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 09:57:43.361560 coreos-metadata[1542]: Apr 21 09:57:43.361 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 09:57:43.364924 coreos-metadata[1542]: Apr 21 09:57:43.364 INFO Fetch successful
Apr 21 09:57:43.365327 coreos-metadata[1542]: Apr 21 09:57:43.365 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 21 09:57:43.366689 jq[1547]: false
Apr 21 09:57:43.369221 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 09:57:43.372930 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 09:57:43.375135 coreos-metadata[1542]: Apr 21 09:57:43.375 INFO Fetch successful
Apr 21 09:57:43.381970 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 09:57:43.400298 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 09:57:43.402925 dbus-daemon[1544]: [system] SELinux support is enabled
Apr 21 09:57:43.406524 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 09:57:43.421772 extend-filesystems[1548]: Found loop4
Apr 21 09:57:43.421772 extend-filesystems[1548]: Found loop5
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found loop6
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found loop7
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda1
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda2
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda3
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found usr
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda4
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda6
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda7
Apr 21 09:57:43.429129 extend-filesystems[1548]: Found sda9
Apr 21 09:57:43.429129 extend-filesystems[1548]: Checking size of /dev/sda9
Apr 21 09:57:43.422253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 09:57:43.437717 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 09:57:43.440073 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 09:57:43.450281 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 09:57:43.453996 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 09:57:43.458190 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 09:57:43.472524 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 09:57:43.472756 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 09:57:43.480400 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 09:57:43.480638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 09:57:43.491567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 09:57:43.491812 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 09:57:43.499041 jq[1579]: true
Apr 21 09:57:43.498191 systemd-networkd[1247]: eth0: Gained IPv6LL
Apr 21 09:57:43.516681 systemd-timesyncd[1522]: Contacted time server 141.84.43.73:123 (0.flatcar.pool.ntp.org).
Apr 21 09:57:43.516750 systemd-timesyncd[1522]: Initial clock synchronization to Tue 2026-04-21 09:57:43.501611 UTC.
Apr 21 09:57:43.517240 extend-filesystems[1548]: Resized partition /dev/sda9
Apr 21 09:57:43.521733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 09:57:43.534290 extend-filesystems[1602]: resize2fs 1.47.1 (20-May-2024)
Apr 21 09:57:43.538386 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 09:57:43.538473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 09:57:43.548199 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 09:57:43.548232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 09:57:43.556026 update_engine[1576]: I20260421 09:57:43.553679 1576 main.cc:92] Flatcar Update Engine starting
Apr 21 09:57:43.562553 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 21 09:57:43.562607 tar[1586]: linux-arm64/LICENSE
Apr 21 09:57:43.563475 tar[1586]: linux-arm64/helm
Apr 21 09:57:43.564995 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 09:57:43.584478 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 09:57:43.588322 update_engine[1576]: I20260421 09:57:43.585414 1576 update_check_scheduler.cc:74] Next update check in 3m44s
Apr 21 09:57:43.588476 jq[1601]: true
Apr 21 09:57:43.588361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 09:57:43.590190 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 09:57:43.674804 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 09:57:43.676898 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 09:57:43.746050 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1261)
Apr 21 09:57:43.753947 systemd-logind[1571]: New seat seat0.
Apr 21 09:57:43.771211 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 21 09:57:43.776723 extend-filesystems[1602]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 21 09:57:43.776723 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 21 09:57:43.776723 extend-filesystems[1602]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 21 09:57:43.800716 extend-filesystems[1548]: Resized filesystem in /dev/sda9
Apr 21 09:57:43.800716 extend-filesystems[1548]: Found sr0
Apr 21 09:57:43.804111 bash[1637]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 09:57:43.778906 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 21 09:57:43.778925 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 21 09:57:43.779950 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 09:57:43.796923 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 09:57:43.797185 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 09:57:43.799687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 09:57:43.815375 systemd[1]: Starting sshkeys.service...
Apr 21 09:57:43.896101 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 09:57:43.904620 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 09:57:43.957776 coreos-metadata[1651]: Apr 21 09:57:43.957 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 21 09:57:43.962648 coreos-metadata[1651]: Apr 21 09:57:43.961 INFO Fetch successful
Apr 21 09:57:43.964997 unknown[1651]: wrote ssh authorized keys file for user: core
Apr 21 09:57:44.008107 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 09:57:44.010719 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 09:57:44.021439 systemd[1]: Finished sshkeys.service.
Apr 21 09:57:44.028944 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 09:57:44.073659 containerd[1603]: time="2026-04-21T09:57:44.073535124Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 09:57:44.136633 containerd[1603]: time="2026-04-21T09:57:44.136563649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.141998 containerd[1603]: time="2026-04-21T09:57:44.141687859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:44.141998 containerd[1603]: time="2026-04-21T09:57:44.141738172Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 09:57:44.141998 containerd[1603]: time="2026-04-21T09:57:44.141756755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 09:57:44.141998 containerd[1603]: time="2026-04-21T09:57:44.141971476Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 09:57:44.141998 containerd[1603]: time="2026-04-21T09:57:44.141991537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142220 containerd[1603]: time="2026-04-21T09:57:44.142103713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142220 containerd[1603]: time="2026-04-21T09:57:44.142118699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142814 containerd[1603]: time="2026-04-21T09:57:44.142776889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142814 containerd[1603]: time="2026-04-21T09:57:44.142808220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142886 containerd[1603]: time="2026-04-21T09:57:44.142824685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:44.142886 containerd[1603]: time="2026-04-21T09:57:44.142834955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.142930666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.143513606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.143985089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.144005430Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.144142103Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 09:57:44.145561 containerd[1603]: time="2026-04-21T09:57:44.144188061Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 09:57:44.151435 containerd[1603]: time="2026-04-21T09:57:44.151382391Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 09:57:44.152174 containerd[1603]: time="2026-04-21T09:57:44.151610979Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 09:57:44.152174 containerd[1603]: time="2026-04-21T09:57:44.151636635Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 09:57:44.152174 containerd[1603]: time="2026-04-21T09:57:44.151755485Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 09:57:44.152174 containerd[1603]: time="2026-04-21T09:57:44.151776505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 09:57:44.152534 containerd[1603]: time="2026-04-21T09:57:44.152483730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 09:57:44.154192 containerd[1603]: time="2026-04-21T09:57:44.154144390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 09:57:44.154421 containerd[1603]: time="2026-04-21T09:57:44.154381570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 09:57:44.154478 containerd[1603]: time="2026-04-21T09:57:44.154428007Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 09:57:44.154478 containerd[1603]: time="2026-04-21T09:57:44.154442434Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 09:57:44.154478 containerd[1603]: time="2026-04-21T09:57:44.154456740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154478 containerd[1603]: time="2026-04-21T09:57:44.154469768Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154553 containerd[1603]: time="2026-04-21T09:57:44.154491069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154553 containerd[1603]: time="2026-04-21T09:57:44.154513648Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154553 containerd[1603]: time="2026-04-21T09:57:44.154530832Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154611 containerd[1603]: time="2026-04-21T09:57:44.154552212Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154611 containerd[1603]: time="2026-04-21T09:57:44.154574951Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154611 containerd[1603]: time="2026-04-21T09:57:44.154588099Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 09:57:44.154661 containerd[1603]: time="2026-04-21T09:57:44.154610478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154661 containerd[1603]: time="2026-04-21T09:57:44.154624785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154661 containerd[1603]: time="2026-04-21T09:57:44.154644806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154723 containerd[1603]: time="2026-04-21T09:57:44.154660432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154723 containerd[1603]: time="2026-04-21T09:57:44.154673100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154723 containerd[1603]: time="2026-04-21T09:57:44.154686887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154723 containerd[1603]: time="2026-04-21T09:57:44.154699795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154846 containerd[1603]: time="2026-04-21T09:57:44.154806296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154846 containerd[1603]: time="2026-04-21T09:57:44.154825598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154846 containerd[1603]: time="2026-04-21T09:57:44.154842023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.154899 containerd[1603]: time="2026-04-21T09:57:44.154855211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.155193 containerd[1603]: time="2026-04-21T09:57:44.155164125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.155293 containerd[1603]: time="2026-04-21T09:57:44.155197134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.155293 containerd[1603]: time="2026-04-21T09:57:44.155223510Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 09:57:44.155453 containerd[1603]: time="2026-04-21T09:57:44.155300798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.155479 containerd[1603]: time="2026-04-21T09:57:44.155456454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.155479 containerd[1603]: time="2026-04-21T09:57:44.155471560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 09:57:44.155690 containerd[1603]: time="2026-04-21T09:57:44.155667738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 09:57:44.156103 containerd[1603]: time="2026-04-21T09:57:44.156070364Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 09:57:44.156103 containerd[1603]: time="2026-04-21T09:57:44.156095101Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 09:57:44.156151 containerd[1603]: time="2026-04-21T09:57:44.156109448Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 09:57:44.156643 containerd[1603]: time="2026-04-21T09:57:44.156119559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.156643 containerd[1603]: time="2026-04-21T09:57:44.156283087Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 09:57:44.156643 containerd[1603]: time="2026-04-21T09:57:44.156299872Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 09:57:44.156643 containerd[1603]: time="2026-04-21T09:57:44.156313419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 09:57:44.157542 containerd[1603]: time="2026-04-21T09:57:44.157163911Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 09:57:44.157694 containerd[1603]: time="2026-04-21T09:57:44.157551831Z" level=info msg="Connect containerd service"
Apr 21 09:57:44.157694 containerd[1603]: time="2026-04-21T09:57:44.157612894Z" level=info msg="using legacy CRI server"
Apr 21 09:57:44.157694 containerd[1603]: time="2026-04-21T09:57:44.157622965Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 09:57:44.160108 containerd[1603]: time="2026-04-21T09:57:44.157970043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 09:57:44.160108 containerd[1603]: time="2026-04-21T09:57:44.159966632Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 09:57:44.160222 containerd[1603]: time="2026-04-21T09:57:44.160185149Z" level=info msg="Start subscribing containerd event"
Apr 21 09:57:44.160303 containerd[1603]: time="2026-04-21T09:57:44.160278743Z" level=info msg="Start recovering state"
Apr 21 09:57:44.160388 containerd[1603]: time="2026-04-21T09:57:44.160371737Z" level=info msg="Start event monitor"
Apr 21 09:57:44.160417 containerd[1603]: time="2026-04-21T09:57:44.160389280Z" level=info msg="Start snapshots syncer"
Apr 21 09:57:44.160417 containerd[1603]: time="2026-04-21T09:57:44.160399071Z" level=info msg="Start cni network conf syncer for default"
Apr 21 09:57:44.160417 containerd[1603]: time="2026-04-21T09:57:44.160407463Z" level=info msg="Start streaming server"
Apr 21 09:57:44.162522 containerd[1603]: time="2026-04-21T09:57:44.162418519Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 09:57:44.162622 containerd[1603]: time="2026-04-21T09:57:44.162561586Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 09:57:44.163266 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 09:57:44.164839 containerd[1603]: time="2026-04-21T09:57:44.164806265Z" level=info msg="containerd successfully booted in 0.095431s"
Apr 21 09:57:44.439844 tar[1586]: linux-arm64/README.md
Apr 21 09:57:44.460519 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 09:57:44.709244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 09:57:44.709891 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 09:57:44.835103 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 09:57:44.859350 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 09:57:44.868525 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 09:57:44.882643 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 09:57:44.882929 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 09:57:44.890051 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 09:57:44.905436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 09:57:44.913507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 09:57:44.918389 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 21 09:57:44.919642 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 09:57:44.922303 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 09:57:44.923080 systemd[1]: Startup finished in 8.101s (kernel) + 4.945s (userspace) = 13.047s.
Apr 21 09:57:45.262989 kubelet[1681]: E0421 09:57:45.262935 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 09:57:45.268362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 09:57:45.269502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 09:57:45.616041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 09:57:45.633147 systemd[1]: Started sshd@0-178.104.211.77:22-50.85.169.122:38712.service - OpenSSH per-connection server daemon (50.85.169.122:38712).
Apr 21 09:57:45.763094 sshd[1713]: Accepted publickey for core from 50.85.169.122 port 38712 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:45.766833 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:45.779503 systemd-logind[1571]: New session 1 of user core.
Apr 21 09:57:45.781502 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 09:57:45.786506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 09:57:45.804260 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 09:57:45.814444 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 09:57:45.820411 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 09:57:45.933101 systemd[1719]: Queued start job for default target default.target.
Apr 21 09:57:45.934290 systemd[1719]: Created slice app.slice - User Application Slice.
Apr 21 09:57:45.934394 systemd[1719]: Reached target paths.target - Paths.
Apr 21 09:57:45.934406 systemd[1719]: Reached target timers.target - Timers.
Apr 21 09:57:45.946234 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 09:57:45.960075 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 09:57:45.960160 systemd[1719]: Reached target sockets.target - Sockets.
Apr 21 09:57:45.960178 systemd[1719]: Reached target basic.target - Basic System.
Apr 21 09:57:45.960235 systemd[1719]: Reached target default.target - Main User Target.
Apr 21 09:57:45.960272 systemd[1719]: Startup finished in 131ms.
Apr 21 09:57:45.960447 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 09:57:45.971880 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 09:57:46.089452 systemd[1]: Started sshd@1-178.104.211.77:22-50.85.169.122:38724.service - OpenSSH per-connection server daemon (50.85.169.122:38724).
Apr 21 09:57:46.222142 sshd[1731]: Accepted publickey for core from 50.85.169.122 port 38724 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:46.224091 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:46.228885 systemd-logind[1571]: New session 2 of user core.
Apr 21 09:57:46.239630 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 09:57:46.340110 sshd[1731]: pam_unix(sshd:session): session closed for user core
Apr 21 09:57:46.346700 systemd[1]: sshd@1-178.104.211.77:22-50.85.169.122:38724.service: Deactivated successfully.
Apr 21 09:57:46.349925 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 09:57:46.350800 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit.
Apr 21 09:57:46.351926 systemd-logind[1571]: Removed session 2.
Apr 21 09:57:46.366605 systemd[1]: Started sshd@2-178.104.211.77:22-50.85.169.122:38738.service - OpenSSH per-connection server daemon (50.85.169.122:38738).
Apr 21 09:57:46.483270 sshd[1739]: Accepted publickey for core from 50.85.169.122 port 38738 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:46.485136 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:46.491777 systemd-logind[1571]: New session 3 of user core.
Apr 21 09:57:46.494381 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 09:57:46.589360 sshd[1739]: pam_unix(sshd:session): session closed for user core
Apr 21 09:57:46.594526 systemd[1]: sshd@2-178.104.211.77:22-50.85.169.122:38738.service: Deactivated successfully.
Apr 21 09:57:46.597916 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 09:57:46.598113 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit.
Apr 21 09:57:46.599479 systemd-logind[1571]: Removed session 3.
Apr 21 09:57:46.611423 systemd[1]: Started sshd@3-178.104.211.77:22-50.85.169.122:38750.service - OpenSSH per-connection server daemon (50.85.169.122:38750).
Apr 21 09:57:46.727074 sshd[1747]: Accepted publickey for core from 50.85.169.122 port 38750 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:46.729362 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:46.734318 systemd-logind[1571]: New session 4 of user core.
Apr 21 09:57:46.741619 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 09:57:46.843831 sshd[1747]: pam_unix(sshd:session): session closed for user core
Apr 21 09:57:46.851307 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit.
Apr 21 09:57:46.852151 systemd[1]: sshd@3-178.104.211.77:22-50.85.169.122:38750.service: Deactivated successfully.
Apr 21 09:57:46.855677 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 09:57:46.856866 systemd-logind[1571]: Removed session 4.
Apr 21 09:57:46.866089 systemd[1]: Started sshd@4-178.104.211.77:22-50.85.169.122:38752.service - OpenSSH per-connection server daemon (50.85.169.122:38752).
Apr 21 09:57:46.995870 sshd[1755]: Accepted publickey for core from 50.85.169.122 port 38752 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:47.000128 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:47.007792 systemd-logind[1571]: New session 5 of user core.
Apr 21 09:57:47.016501 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 09:57:47.118295 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 09:57:47.118582 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 09:57:47.137935 sudo[1759]: pam_unix(sudo:session): session closed for user root
Apr 21 09:57:47.156055 sshd[1755]: pam_unix(sshd:session): session closed for user core
Apr 21 09:57:47.161454 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit.
Apr 21 09:57:47.161815 systemd[1]: sshd@4-178.104.211.77:22-50.85.169.122:38752.service: Deactivated successfully.
Apr 21 09:57:47.164784 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 09:57:47.166835 systemd-logind[1571]: Removed session 5.
Apr 21 09:57:47.180493 systemd[1]: Started sshd@5-178.104.211.77:22-50.85.169.122:38758.service - OpenSSH per-connection server daemon (50.85.169.122:38758).
Apr 21 09:57:47.309263 sshd[1764]: Accepted publickey for core from 50.85.169.122 port 38758 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:57:47.311460 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:57:47.319632 systemd-logind[1571]: New session 6 of user core.
Apr 21 09:57:47.325583 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 09:57:47.414633 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 09:57:47.414914 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 09:57:47.419334 sudo[1769]: pam_unix(sudo:session): session closed for user root
Apr 21 09:57:47.425986 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 09:57:47.426678 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 09:57:47.442793 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 09:57:47.454841 auditctl[1772]: No rules
Apr 21 09:57:47.456403 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 09:57:47.456823 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 09:57:47.465749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 09:57:47.497760 augenrules[1791]: No rules
Apr 21 09:57:47.500710 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 09:57:47.504635 sudo[1768]: pam_unix(sudo:session): session closed for user root
Apr 21 09:57:47.523272 sshd[1764]: pam_unix(sshd:session): session closed for user core
Apr 21 09:57:47.531210 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit.
Apr 21 09:57:47.531449 systemd[1]: sshd@5-178.104.211.77:22-50.85.169.122:38758.service: Deactivated successfully.
Apr 21 09:57:47.534426 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 09:57:47.535583 systemd-logind[1571]: Removed session 6. Apr 21 09:57:47.546436 systemd[1]: Started sshd@6-178.104.211.77:22-50.85.169.122:38768.service - OpenSSH per-connection server daemon (50.85.169.122:38768). Apr 21 09:57:47.686794 sshd[1800]: Accepted publickey for core from 50.85.169.122 port 38768 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:47.688810 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:47.698597 systemd-logind[1571]: New session 7 of user core. Apr 21 09:57:47.709655 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 09:57:47.798000 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 09:57:47.798330 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:48.127480 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 09:57:48.127563 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 09:57:48.398941 dockerd[1820]: time="2026-04-21T09:57:48.398051006Z" level=info msg="Starting up" Apr 21 09:57:48.595212 systemd[1]: var-lib-docker-metacopy\x2dcheck67452206-merged.mount: Deactivated successfully. Apr 21 09:57:48.603963 dockerd[1820]: time="2026-04-21T09:57:48.603903331Z" level=info msg="Loading containers: start." Apr 21 09:57:48.711228 kernel: Initializing XFRM netlink socket Apr 21 09:57:48.812436 systemd-networkd[1247]: docker0: Link UP Apr 21 09:57:48.842083 dockerd[1820]: time="2026-04-21T09:57:48.841907035Z" level=info msg="Loading containers: done." 
Apr 21 09:57:48.860635 dockerd[1820]: time="2026-04-21T09:57:48.860557760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 09:57:48.860807 dockerd[1820]: time="2026-04-21T09:57:48.860677794Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 09:57:48.860807 dockerd[1820]: time="2026-04-21T09:57:48.860796110Z" level=info msg="Daemon has completed initialization" Apr 21 09:57:48.904286 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 09:57:48.905969 dockerd[1820]: time="2026-04-21T09:57:48.903979429Z" level=info msg="API listen on /run/docker.sock" Apr 21 09:57:49.478858 containerd[1603]: time="2026-04-21T09:57:49.478818348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 09:57:50.074929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853617895.mount: Deactivated successfully. 
Apr 21 09:57:51.273914 containerd[1603]: time="2026-04-21T09:57:51.273836665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:51.276470 containerd[1603]: time="2026-04-21T09:57:51.276417502Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885" Apr 21 09:57:51.277442 containerd[1603]: time="2026-04-21T09:57:51.276665516Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:51.280883 containerd[1603]: time="2026-04-21T09:57:51.280839054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:51.282597 containerd[1603]: time="2026-04-21T09:57:51.282545047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.80368093s" Apr 21 09:57:51.282734 containerd[1603]: time="2026-04-21T09:57:51.282702514Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 21 09:57:51.283871 containerd[1603]: time="2026-04-21T09:57:51.283587312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 09:57:52.717469 containerd[1603]: time="2026-04-21T09:57:52.717353300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:52.719272 containerd[1603]: time="2026-04-21T09:57:52.719089620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794" Apr 21 09:57:52.720715 containerd[1603]: time="2026-04-21T09:57:52.720255935Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:52.723728 containerd[1603]: time="2026-04-21T09:57:52.723686837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:52.725320 containerd[1603]: time="2026-04-21T09:57:52.725276278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.441651627s" Apr 21 09:57:52.725320 containerd[1603]: time="2026-04-21T09:57:52.725319134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 21 09:57:52.726234 containerd[1603]: time="2026-04-21T09:57:52.726207923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 09:57:54.195985 containerd[1603]: time="2026-04-21T09:57:54.195933436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.198789 containerd[1603]: 
time="2026-04-21T09:57:54.198753145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378" Apr 21 09:57:54.200287 containerd[1603]: time="2026-04-21T09:57:54.200222991Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.204455 containerd[1603]: time="2026-04-21T09:57:54.204416073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.205835 containerd[1603]: time="2026-04-21T09:57:54.205787886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.479462308s" Apr 21 09:57:54.205835 containerd[1603]: time="2026-04-21T09:57:54.205833704Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 21 09:57:54.207144 containerd[1603]: time="2026-04-21T09:57:54.207115681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 09:57:55.323424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516555074.mount: Deactivated successfully. Apr 21 09:57:55.326088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 09:57:55.334401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:57:55.492617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 09:57:55.502764 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 09:57:55.551059 kubelet[2044]: E0421 09:57:55.550972 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 09:57:55.557346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 09:57:55.557545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 09:57:55.738533 containerd[1603]: time="2026-04-21T09:57:55.738371347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.740830 containerd[1603]: time="2026-04-21T09:57:55.740322578Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534" Apr 21 09:57:55.742038 containerd[1603]: time="2026-04-21T09:57:55.741979343Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.748285 containerd[1603]: time="2026-04-21T09:57:55.748230935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.749757 containerd[1603]: time="2026-04-21T09:57:55.749719976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.542566234s" Apr 21 09:57:55.749757 containerd[1603]: time="2026-04-21T09:57:55.749759039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 21 09:57:55.750311 containerd[1603]: time="2026-04-21T09:57:55.750277962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 09:57:56.263884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount749655650.mount: Deactivated successfully. Apr 21 09:57:57.293055 containerd[1603]: time="2026-04-21T09:57:57.292732543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.294947 containerd[1603]: time="2026-04-21T09:57:57.294902514Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Apr 21 09:57:57.295736 containerd[1603]: time="2026-04-21T09:57:57.295011790Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.304695 containerd[1603]: time="2026-04-21T09:57:57.304562126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.308044 containerd[1603]: time="2026-04-21T09:57:57.307771001Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.557437343s" Apr 21 09:57:57.308044 containerd[1603]: time="2026-04-21T09:57:57.307889754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 21 09:57:57.309674 containerd[1603]: time="2026-04-21T09:57:57.309444651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 09:57:57.783360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805836436.mount: Deactivated successfully. Apr 21 09:57:57.795082 containerd[1603]: time="2026-04-21T09:57:57.794908534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.797216 containerd[1603]: time="2026-04-21T09:57:57.797128725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 21 09:57:57.798751 containerd[1603]: time="2026-04-21T09:57:57.798528045Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.801448 containerd[1603]: time="2026-04-21T09:57:57.801395977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:57.803249 containerd[1603]: time="2026-04-21T09:57:57.803184341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.697587ms" Apr 21 
09:57:57.803249 containerd[1603]: time="2026-04-21T09:57:57.803233241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 21 09:57:57.803892 containerd[1603]: time="2026-04-21T09:57:57.803836799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 09:57:58.312578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822867171.mount: Deactivated successfully. Apr 21 09:57:59.329103 containerd[1603]: time="2026-04-21T09:57:59.328681130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.331121 containerd[1603]: time="2026-04-21T09:57:59.331065291Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470" Apr 21 09:57:59.332647 containerd[1603]: time="2026-04-21T09:57:59.332591074Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.336982 containerd[1603]: time="2026-04-21T09:57:59.335583901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.337128 containerd[1603]: time="2026-04-21T09:57:59.336913633Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.533036889s" Apr 21 09:57:59.337128 containerd[1603]: time="2026-04-21T09:57:59.337059941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 21 09:58:03.662746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:03.674882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:03.717229 systemd[1]: Reloading requested from client PID 2201 ('systemctl') (unit session-7.scope)... Apr 21 09:58:03.717245 systemd[1]: Reloading... Apr 21 09:58:03.840116 zram_generator::config[2241]: No configuration found. Apr 21 09:58:03.947910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 09:58:04.019287 systemd[1]: Reloading finished in 301 ms. Apr 21 09:58:04.080851 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 09:58:04.081276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:04.090795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:04.225243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:04.236521 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 09:58:04.285742 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:04.288099 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 09:58:04.288099 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:04.288099 kubelet[2302]: I0421 09:58:04.286182 2302 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 09:58:05.292270 kubelet[2302]: I0421 09:58:05.292188 2302 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 09:58:05.292270 kubelet[2302]: I0421 09:58:05.292222 2302 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 09:58:05.292864 kubelet[2302]: I0421 09:58:05.292486 2302 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 09:58:05.324293 kubelet[2302]: E0421 09:58:05.322555 2302 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.104.211.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 09:58:05.326397 kubelet[2302]: I0421 09:58:05.326360 2302 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:05.336437 kubelet[2302]: E0421 09:58:05.336356 2302 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 09:58:05.336437 kubelet[2302]: I0421 09:58:05.336423 2302 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 09:58:05.340759 kubelet[2302]: I0421 09:58:05.340718 2302 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 09:58:05.343407 kubelet[2302]: I0421 09:58:05.343331 2302 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 09:58:05.343586 kubelet[2302]: I0421 09:58:05.343391 2302 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-fa740892b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 09:58:05.343586 kubelet[2302]: I0421 09:58:05.343569 2302 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
09:58:05.343586 kubelet[2302]: I0421 09:58:05.343581 2302 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 09:58:05.343843 kubelet[2302]: I0421 09:58:05.343823 2302 state_mem.go:36] "Initialized new in-memory state store" Apr 21 09:58:05.348423 kubelet[2302]: I0421 09:58:05.348377 2302 kubelet.go:480] "Attempting to sync node with API server" Apr 21 09:58:05.348423 kubelet[2302]: I0421 09:58:05.348413 2302 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 09:58:05.348605 kubelet[2302]: I0421 09:58:05.348446 2302 kubelet.go:386] "Adding apiserver pod source" Apr 21 09:58:05.351045 kubelet[2302]: I0421 09:58:05.349977 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 09:58:05.356522 kubelet[2302]: E0421 09:58:05.356487 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.104.211.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-7-fa740892b3&limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 09:58:05.357913 kubelet[2302]: E0421 09:58:05.357832 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.104.211.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 09:58:05.358167 kubelet[2302]: I0421 09:58:05.358150 2302 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 09:58:05.358962 kubelet[2302]: I0421 09:58:05.358941 2302 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 
09:58:05.359268 kubelet[2302]: W0421 09:58:05.359252 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 09:58:05.363260 kubelet[2302]: I0421 09:58:05.363191 2302 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 09:58:05.363260 kubelet[2302]: I0421 09:58:05.363252 2302 server.go:1289] "Started kubelet" Apr 21 09:58:05.364988 kubelet[2302]: I0421 09:58:05.363484 2302 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 09:58:05.364988 kubelet[2302]: I0421 09:58:05.364351 2302 server.go:317] "Adding debug handlers to kubelet server" Apr 21 09:58:05.370490 kubelet[2302]: E0421 09:58:05.366990 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.104.211.77:6443/api/v1/namespaces/default/events\": dial tcp 178.104.211.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-7-fa740892b3.18a856cb943faad8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-7-fa740892b3,UID:ci-4081-3-7-7-fa740892b3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-fa740892b3,},FirstTimestamp:2026-04-21 09:58:05.363210968 +0000 UTC m=+1.118667690,LastTimestamp:2026-04-21 09:58:05.363210968 +0000 UTC m=+1.118667690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-fa740892b3,}" Apr 21 09:58:05.370681 kubelet[2302]: I0421 09:58:05.370513 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 09:58:05.371190 kubelet[2302]: I0421 09:58:05.371071 2302 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 09:58:05.373182 
kubelet[2302]: I0421 09:58:05.373075 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 09:58:05.374121 kubelet[2302]: I0421 09:58:05.373736 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 09:58:05.375590 kubelet[2302]: I0421 09:58:05.375535 2302 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 09:58:05.375831 kubelet[2302]: E0421 09:58:05.375799 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-fa740892b3\" not found" Apr 21 09:58:05.376334 kubelet[2302]: I0421 09:58:05.376305 2302 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 09:58:05.376485 kubelet[2302]: I0421 09:58:05.376464 2302 reconciler.go:26] "Reconciler: start to sync state" Apr 21 09:58:05.379903 kubelet[2302]: E0421 09:58:05.379555 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.104.211.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 09:58:05.380844 kubelet[2302]: E0421 09:58:05.380791 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.211.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-fa740892b3?timeout=10s\": dial tcp 178.104.211.77:6443: connect: connection refused" interval="200ms" Apr 21 09:58:05.381544 kubelet[2302]: I0421 09:58:05.381508 2302 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 09:58:05.384907 kubelet[2302]: I0421 09:58:05.384859 2302 factory.go:223] Registration of the containerd 
container factory successfully Apr 21 09:58:05.384907 kubelet[2302]: I0421 09:58:05.384888 2302 factory.go:223] Registration of the systemd container factory successfully Apr 21 09:58:05.392374 kubelet[2302]: E0421 09:58:05.392329 2302 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 09:58:05.408703 kubelet[2302]: I0421 09:58:05.408653 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 09:58:05.409956 kubelet[2302]: I0421 09:58:05.409934 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 09:58:05.410164 kubelet[2302]: I0421 09:58:05.410152 2302 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 09:58:05.410287 kubelet[2302]: I0421 09:58:05.410271 2302 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 09:58:05.410340 kubelet[2302]: I0421 09:58:05.410332 2302 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 09:58:05.410462 kubelet[2302]: E0421 09:58:05.410441 2302 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 09:58:05.416452 kubelet[2302]: E0421 09:58:05.416410 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.104.211.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 09:58:05.424109 kubelet[2302]: I0421 09:58:05.423843 2302 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 09:58:05.424109 kubelet[2302]: I0421 09:58:05.423861 2302 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 
09:58:05.424109 kubelet[2302]: I0421 09:58:05.423881 2302 state_mem.go:36] "Initialized new in-memory state store" Apr 21 09:58:05.426776 kubelet[2302]: I0421 09:58:05.426751 2302 policy_none.go:49] "None policy: Start" Apr 21 09:58:05.427190 kubelet[2302]: I0421 09:58:05.426900 2302 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 09:58:05.427190 kubelet[2302]: I0421 09:58:05.426921 2302 state_mem.go:35] "Initializing new in-memory state store" Apr 21 09:58:05.432049 kubelet[2302]: E0421 09:58:05.431213 2302 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 09:58:05.432049 kubelet[2302]: I0421 09:58:05.431497 2302 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 09:58:05.432049 kubelet[2302]: I0421 09:58:05.431510 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 09:58:05.433418 kubelet[2302]: I0421 09:58:05.433394 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 09:58:05.434699 kubelet[2302]: E0421 09:58:05.434679 2302 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 09:58:05.434870 kubelet[2302]: E0421 09:58:05.434857 2302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-7-fa740892b3\" not found" Apr 21 09:58:05.523433 kubelet[2302]: E0421 09:58:05.523395 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.533081 kubelet[2302]: E0421 09:58:05.532889 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.536750 kubelet[2302]: E0421 09:58:05.536414 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.538854 kubelet[2302]: I0421 09:58:05.538713 2302 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.539571 kubelet[2302]: E0421 09:58:05.539528 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.104.211.77:6443/api/v1/nodes\": dial tcp 178.104.211.77:6443: connect: connection refused" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.578801 kubelet[2302]: I0421 09:58:05.578390 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.578801 kubelet[2302]: I0421 09:58:05.578456 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.578801 kubelet[2302]: I0421 09:58:05.578499 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.578801 kubelet[2302]: I0421 09:58:05.578560 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.578801 kubelet[2302]: I0421 09:58:05.578594 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.579245 kubelet[2302]: I0421 09:58:05.578632 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " 
pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.579245 kubelet[2302]: I0421 09:58:05.578673 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.579245 kubelet[2302]: I0421 09:58:05.578720 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e68629c96ca0b424ac3f113a1d6d5825-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-fa740892b3\" (UID: \"e68629c96ca0b424ac3f113a1d6d5825\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.579245 kubelet[2302]: I0421 09:58:05.578751 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.581837 kubelet[2302]: E0421 09:58:05.581715 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.211.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-fa740892b3?timeout=10s\": dial tcp 178.104.211.77:6443: connect: connection refused" interval="400ms" Apr 21 09:58:05.742971 kubelet[2302]: I0421 09:58:05.742639 2302 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.743157 kubelet[2302]: E0421 09:58:05.743053 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://178.104.211.77:6443/api/v1/nodes\": dial tcp 178.104.211.77:6443: connect: connection refused" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:05.825264 containerd[1603]: time="2026-04-21T09:58:05.825205234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-fa740892b3,Uid:10f84141712548c6a49a641944d640ca,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.835186 containerd[1603]: time="2026-04-21T09:58:05.834967341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-fa740892b3,Uid:2517f5507b6e5865a7993778d815e467,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.838182 containerd[1603]: time="2026-04-21T09:58:05.837894322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-fa740892b3,Uid:e68629c96ca0b424ac3f113a1d6d5825,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.982713 kubelet[2302]: E0421 09:58:05.982594 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.211.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-fa740892b3?timeout=10s\": dial tcp 178.104.211.77:6443: connect: connection refused" interval="800ms" Apr 21 09:58:06.146388 kubelet[2302]: I0421 09:58:06.146215 2302 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:06.146892 kubelet[2302]: E0421 09:58:06.146820 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.104.211.77:6443/api/v1/nodes\": dial tcp 178.104.211.77:6443: connect: connection refused" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:06.237301 kubelet[2302]: E0421 09:58:06.237223 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.104.211.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 09:58:06.273033 kubelet[2302]: E0421 09:58:06.272170 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.104.211.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 09:58:06.276610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451507752.mount: Deactivated successfully. Apr 21 09:58:06.287993 containerd[1603]: time="2026-04-21T09:58:06.286698763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:06.288366 containerd[1603]: time="2026-04-21T09:58:06.288323999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:06.289763 containerd[1603]: time="2026-04-21T09:58:06.289712368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 21 09:58:06.290898 containerd[1603]: time="2026-04-21T09:58:06.290866789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 09:58:06.292238 containerd[1603]: time="2026-04-21T09:58:06.292193012Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:06.294078 containerd[1603]: time="2026-04-21T09:58:06.293577262Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:06.295061 containerd[1603]: time="2026-04-21T09:58:06.294820264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 09:58:06.296797 containerd[1603]: time="2026-04-21T09:58:06.296746232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:06.299277 containerd[1603]: time="2026-04-21T09:58:06.299092627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.77426ms" Apr 21 09:58:06.302589 containerd[1603]: time="2026-04-21T09:58:06.302546333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 466.737393ms" Apr 21 09:58:06.303803 containerd[1603]: time="2026-04-21T09:58:06.303758382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.73657ms" Apr 21 09:58:06.413187 kubelet[2302]: E0421 09:58:06.412948 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://178.104.211.77:6443/api/v1/namespaces/default/events\": dial tcp 178.104.211.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-7-fa740892b3.18a856cb943faad8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-7-fa740892b3,UID:ci-4081-3-7-7-fa740892b3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-fa740892b3,},FirstTimestamp:2026-04-21 09:58:05.363210968 +0000 UTC m=+1.118667690,LastTimestamp:2026-04-21 09:58:05.363210968 +0000 UTC m=+1.118667690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-fa740892b3,}" Apr 21 09:58:06.432354 kubelet[2302]: E0421 09:58:06.432312 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.104.211.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.104.211.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 09:58:06.440780 containerd[1603]: time="2026-04-21T09:58:06.440547105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:06.440780 containerd[1603]: time="2026-04-21T09:58:06.440610251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:06.440780 containerd[1603]: time="2026-04-21T09:58:06.440625247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.443263 containerd[1603]: time="2026-04-21T09:58:06.442948047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:06.443263 containerd[1603]: time="2026-04-21T09:58:06.443059542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:06.443263 containerd[1603]: time="2026-04-21T09:58:06.443071499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.443263 containerd[1603]: time="2026-04-21T09:58:06.443181555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.445602 containerd[1603]: time="2026-04-21T09:58:06.445523710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.455717 containerd[1603]: time="2026-04-21T09:58:06.455611051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:06.455990 containerd[1603]: time="2026-04-21T09:58:06.455897747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:06.456669 containerd[1603]: time="2026-04-21T09:58:06.456556359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.457316 containerd[1603]: time="2026-04-21T09:58:06.456938793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:06.527390 containerd[1603]: time="2026-04-21T09:58:06.526849535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-fa740892b3,Uid:10f84141712548c6a49a641944d640ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"842d1ea5a523ed714c60b25d8583c6933a73e50709d3fd508241c3f10238d911\"" Apr 21 09:58:06.541345 containerd[1603]: time="2026-04-21T09:58:06.541132096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-fa740892b3,Uid:e68629c96ca0b424ac3f113a1d6d5825,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d53d71aed581630b6d012e81478390eb1e6ecd2daf6f16cd0e2dfb23cbb4dd\"" Apr 21 09:58:06.542604 containerd[1603]: time="2026-04-21T09:58:06.542483954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-fa740892b3,Uid:2517f5507b6e5865a7993778d815e467,Namespace:kube-system,Attempt:0,} returns sandbox id \"491d42d5e8784ea0a92ebdf6e45da82d731652f69d630164f4394e969f992521\"" Apr 21 09:58:06.546123 containerd[1603]: time="2026-04-21T09:58:06.545601175Z" level=info msg="CreateContainer within sandbox \"842d1ea5a523ed714c60b25d8583c6933a73e50709d3fd508241c3f10238d911\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 09:58:06.554476 containerd[1603]: time="2026-04-21T09:58:06.554144582Z" level=info msg="CreateContainer within sandbox \"b5d53d71aed581630b6d012e81478390eb1e6ecd2daf6f16cd0e2dfb23cbb4dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 09:58:06.556277 containerd[1603]: time="2026-04-21T09:58:06.556225436Z" level=info msg="CreateContainer within sandbox \"491d42d5e8784ea0a92ebdf6e45da82d731652f69d630164f4394e969f992521\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 09:58:06.566479 containerd[1603]: time="2026-04-21T09:58:06.566422712Z" level=info msg="CreateContainer within sandbox 
\"842d1ea5a523ed714c60b25d8583c6933a73e50709d3fd508241c3f10238d911\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edeb1af636511faa788c2dd42baa84c1a5c7eb4532b4cf6dc801a1fd56af4b87\"" Apr 21 09:58:06.567516 containerd[1603]: time="2026-04-21T09:58:06.567483154Z" level=info msg="StartContainer for \"edeb1af636511faa788c2dd42baa84c1a5c7eb4532b4cf6dc801a1fd56af4b87\"" Apr 21 09:58:06.577983 containerd[1603]: time="2026-04-21T09:58:06.577764532Z" level=info msg="CreateContainer within sandbox \"b5d53d71aed581630b6d012e81478390eb1e6ecd2daf6f16cd0e2dfb23cbb4dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b\"" Apr 21 09:58:06.579539 containerd[1603]: time="2026-04-21T09:58:06.579449874Z" level=info msg="StartContainer for \"9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b\"" Apr 21 09:58:06.585866 containerd[1603]: time="2026-04-21T09:58:06.585786295Z" level=info msg="CreateContainer within sandbox \"491d42d5e8784ea0a92ebdf6e45da82d731652f69d630164f4394e969f992521\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011\"" Apr 21 09:58:06.587891 containerd[1603]: time="2026-04-21T09:58:06.586669097Z" level=info msg="StartContainer for \"36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011\"" Apr 21 09:58:06.654999 containerd[1603]: time="2026-04-21T09:58:06.654946285Z" level=info msg="StartContainer for \"edeb1af636511faa788c2dd42baa84c1a5c7eb4532b4cf6dc801a1fd56af4b87\" returns successfully" Apr 21 09:58:06.693360 containerd[1603]: time="2026-04-21T09:58:06.693152888Z" level=info msg="StartContainer for \"36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011\" returns successfully" Apr 21 09:58:06.722986 containerd[1603]: time="2026-04-21T09:58:06.722367465Z" level=info msg="StartContainer for 
\"9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b\" returns successfully" Apr 21 09:58:06.951518 kubelet[2302]: I0421 09:58:06.951224 2302 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:07.440509 kubelet[2302]: E0421 09:58:07.434011 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:07.450686 kubelet[2302]: E0421 09:58:07.447581 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:07.454001 kubelet[2302]: E0421 09:58:07.452393 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.323509 kubelet[2302]: E0421 09:58:08.323459 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.358734 kubelet[2302]: I0421 09:58:08.358406 2302 apiserver.go:52] "Watching apiserver" Apr 21 09:58:08.376573 kubelet[2302]: I0421 09:58:08.376454 2302 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 09:58:08.451227 kubelet[2302]: E0421 09:58:08.450860 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.452173 kubelet[2302]: E0421 09:58:08.451476 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-fa740892b3\" not found" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.478053 
kubelet[2302]: I0421 09:58:08.475342 2302 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.478053 kubelet[2302]: I0421 09:58:08.476449 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.556914 kubelet[2302]: E0421 09:58:08.556442 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.556914 kubelet[2302]: I0421 09:58:08.556641 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.572366 kubelet[2302]: E0421 09:58:08.572312 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.572366 kubelet[2302]: I0421 09:58:08.572355 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:08.580342 kubelet[2302]: E0421 09:58:08.580198 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-fa740892b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:09.452699 kubelet[2302]: I0421 09:58:09.452459 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:09.823756 kubelet[2302]: I0421 09:58:09.821943 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:10.261659 kubelet[2302]: 
I0421 09:58:10.261212 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3" Apr 21 09:58:10.690965 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-7.scope)... Apr 21 09:58:10.691026 systemd[1]: Reloading... Apr 21 09:58:10.790295 zram_generator::config[2623]: No configuration found. Apr 21 09:58:10.913039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 09:58:10.999915 systemd[1]: Reloading finished in 308 ms. Apr 21 09:58:11.038040 kubelet[2302]: I0421 09:58:11.037789 2302 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:11.037980 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:11.052412 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 09:58:11.053436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:11.060635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:11.224334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:11.240512 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 09:58:11.314388 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:11.314388 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 21 09:58:11.314388 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:11.314388 kubelet[2678]: I0421 09:58:11.313202 2678 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 09:58:11.325705 kubelet[2678]: I0421 09:58:11.325666 2678 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 09:58:11.325870 kubelet[2678]: I0421 09:58:11.325859 2678 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 09:58:11.326276 kubelet[2678]: I0421 09:58:11.326254 2678 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 09:58:11.328097 kubelet[2678]: I0421 09:58:11.328070 2678 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 09:58:11.331382 kubelet[2678]: I0421 09:58:11.331350 2678 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:11.335785 kubelet[2678]: E0421 09:58:11.335736 2678 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 09:58:11.335785 kubelet[2678]: I0421 09:58:11.335781 2678 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 09:58:11.339612 kubelet[2678]: I0421 09:58:11.339584 2678 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 09:58:11.340268 kubelet[2678]: I0421 09:58:11.340183 2678 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 09:58:11.340482 kubelet[2678]: I0421 09:58:11.340275 2678 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-fa740892b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 09:58:11.340482 kubelet[2678]: I0421 09:58:11.340483 2678 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
09:58:11.340753 kubelet[2678]: I0421 09:58:11.340493 2678 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 09:58:11.340753 kubelet[2678]: I0421 09:58:11.340568 2678 state_mem.go:36] "Initialized new in-memory state store" Apr 21 09:58:11.340863 kubelet[2678]: I0421 09:58:11.340813 2678 kubelet.go:480] "Attempting to sync node with API server" Apr 21 09:58:11.340863 kubelet[2678]: I0421 09:58:11.340834 2678 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 09:58:11.343108 kubelet[2678]: I0421 09:58:11.342349 2678 kubelet.go:386] "Adding apiserver pod source" Apr 21 09:58:11.343108 kubelet[2678]: I0421 09:58:11.342391 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 09:58:11.345542 kubelet[2678]: I0421 09:58:11.345516 2678 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 09:58:11.348722 kubelet[2678]: I0421 09:58:11.348358 2678 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 09:58:11.357070 kubelet[2678]: I0421 09:58:11.356553 2678 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 09:58:11.357070 kubelet[2678]: I0421 09:58:11.356604 2678 server.go:1289] "Started kubelet" Apr 21 09:58:11.359405 kubelet[2678]: I0421 09:58:11.359383 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 09:58:11.365515 kubelet[2678]: I0421 09:58:11.365149 2678 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 09:58:11.366857 kubelet[2678]: I0421 09:58:11.366411 2678 server.go:317] "Adding debug handlers to kubelet server" Apr 21 09:58:11.367682 kubelet[2678]: I0421 09:58:11.367113 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 09:58:11.371483 kubelet[2678]: I0421 09:58:11.371330 2678 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 09:58:11.378216 kubelet[2678]: I0421 09:58:11.375383 2678 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 09:58:11.378216 kubelet[2678]: E0421 09:58:11.375643 2678 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-fa740892b3\" not found" Apr 21 09:58:11.379062 kubelet[2678]: I0421 09:58:11.378596 2678 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 09:58:11.379062 kubelet[2678]: I0421 09:58:11.378725 2678 reconciler.go:26] "Reconciler: start to sync state" Apr 21 09:58:11.379813 kubelet[2678]: I0421 09:58:11.379332 2678 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 09:58:11.394954 kubelet[2678]: I0421 09:58:11.394817 2678 factory.go:223] Registration of the systemd container factory successfully Apr 21 09:58:11.395393 kubelet[2678]: I0421 09:58:11.395368 2678 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 09:58:11.399753 kubelet[2678]: I0421 09:58:11.399584 2678 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 09:58:11.400638 kubelet[2678]: I0421 09:58:11.400574 2678 factory.go:223] Registration of the containerd container factory successfully Apr 21 09:58:11.401749 kubelet[2678]: I0421 09:58:11.401402 2678 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6"
Apr 21 09:58:11.401749 kubelet[2678]: I0421 09:58:11.401455 2678 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 09:58:11.401749 kubelet[2678]: I0421 09:58:11.401479 2678 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 09:58:11.401749 kubelet[2678]: I0421 09:58:11.401489 2678 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 09:58:11.402144 kubelet[2678]: E0421 09:58:11.401539 2678 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 09:58:11.411011 kubelet[2678]: E0421 09:58:11.410959 2678 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 09:58:11.478426 kubelet[2678]: I0421 09:58:11.478401 2678 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 09:58:11.478659 kubelet[2678]: I0421 09:58:11.478632 2678 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 09:58:11.478730 kubelet[2678]: I0421 09:58:11.478721 2678 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 09:58:11.478991 kubelet[2678]: I0421 09:58:11.478954 2678 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 21 09:58:11.479094 kubelet[2678]: I0421 09:58:11.479069 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 21 09:58:11.479142 kubelet[2678]: I0421 09:58:11.479133 2678 policy_none.go:49] "None policy: Start"
Apr 21 09:58:11.479221 kubelet[2678]: I0421 09:58:11.479210 2678 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 09:58:11.479278 kubelet[2678]: I0421 09:58:11.479271 2678 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 09:58:11.479421 kubelet[2678]: I0421 09:58:11.479410 2678 state_mem.go:75] "Updated machine memory state"
Apr 21 09:58:11.480734 kubelet[2678]: E0421 09:58:11.480713 2678 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 09:58:11.481229 kubelet[2678]: I0421 09:58:11.481212 2678 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 09:58:11.482151 kubelet[2678]: I0421 09:58:11.482112 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 09:58:11.482930 kubelet[2678]: I0421 09:58:11.482906 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 09:58:11.489223 kubelet[2678]: E0421 09:58:11.489175 2678 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 09:58:11.503524 kubelet[2678]: I0421 09:58:11.503490 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.504668 kubelet[2678]: I0421 09:58:11.503998 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.504668 kubelet[2678]: I0421 09:58:11.504490 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.515269 kubelet[2678]: E0421 09:58:11.515213 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.517612 kubelet[2678]: E0421 09:58:11.517373 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-fa740892b3\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.517612 kubelet[2678]: E0421 09:58:11.517535 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.592924 kubelet[2678]: I0421 09:58:11.592364 2678 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.605996 kubelet[2678]: I0421 09:58:11.605815 2678 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.605996 kubelet[2678]: I0421 09:58:11.605946 2678 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681308 kubelet[2678]: I0421 09:58:11.680896 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681460 kubelet[2678]: I0421 09:58:11.681349 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681711 kubelet[2678]: I0421 09:58:11.681535 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681711 kubelet[2678]: I0421 09:58:11.681603 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681711 kubelet[2678]: I0421 09:58:11.681630 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2517f5507b6e5865a7993778d815e467-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-fa740892b3\" (UID: \"2517f5507b6e5865a7993778d815e467\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681866 kubelet[2678]: I0421 09:58:11.681725 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e68629c96ca0b424ac3f113a1d6d5825-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-fa740892b3\" (UID: \"e68629c96ca0b424ac3f113a1d6d5825\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.681866 kubelet[2678]: I0421 09:58:11.681836 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.682008 kubelet[2678]: I0421 09:58:11.681879 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.682008 kubelet[2678]: I0421 09:58:11.681910 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10f84141712548c6a49a641944d640ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" (UID: \"10f84141712548c6a49a641944d640ca\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:11.696120 sudo[2713]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 21 09:58:11.696437 sudo[2713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 21 09:58:12.171830 sudo[2713]: pam_unix(sudo:session): session closed for user root
Apr 21 09:58:12.344831 kubelet[2678]: I0421 09:58:12.344772 2678 apiserver.go:52] "Watching apiserver"
Apr 21 09:58:12.380066 kubelet[2678]: I0421 09:58:12.378982 2678 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 21 09:58:12.447055 kubelet[2678]: I0421 09:58:12.446860 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:12.458669 kubelet[2678]: E0421 09:58:12.458633 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-fa740892b3\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3"
Apr 21 09:58:12.489397 kubelet[2678]: I0421 09:58:12.489293 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-fa740892b3" podStartSLOduration=3.489270195 podStartE2EDuration="3.489270195s" podCreationTimestamp="2026-04-21 09:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:12.476299168 +0000 UTC m=+1.226608556" watchObservedRunningTime="2026-04-21 09:58:12.489270195 +0000 UTC m=+1.239579583"
Apr 21 09:58:12.508044 kubelet[2678]: I0421 09:58:12.507834 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-7-fa740892b3" podStartSLOduration=2.507806337 podStartE2EDuration="2.507806337s" podCreationTimestamp="2026-04-21 09:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:12.489732325 +0000 UTC m=+1.240041713" watchObservedRunningTime="2026-04-21 09:58:12.507806337 +0000 UTC m=+1.258115725"
Apr 21 09:58:12.524496 kubelet[2678]: I0421 09:58:12.524424 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-7-fa740892b3" podStartSLOduration=3.524407813 podStartE2EDuration="3.524407813s" podCreationTimestamp="2026-04-21 09:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:12.509179848 +0000 UTC m=+1.259489236" watchObservedRunningTime="2026-04-21 09:58:12.524407813 +0000 UTC m=+1.274717201"
Apr 21 09:58:14.654472 sudo[1804]: pam_unix(sudo:session): session closed for user root
Apr 21 09:58:14.671738 sshd[1800]: pam_unix(sshd:session): session closed for user core
Apr 21 09:58:14.678245 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit.
Apr 21 09:58:14.678543 systemd[1]: sshd@6-178.104.211.77:22-50.85.169.122:38768.service: Deactivated successfully.
Apr 21 09:58:14.682917 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 09:58:14.685734 systemd-logind[1571]: Removed session 7.
Apr 21 09:58:15.300635 kubelet[2678]: I0421 09:58:15.300594 2678 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 09:58:15.302079 containerd[1603]: time="2026-04-21T09:58:15.301912297Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 09:58:15.302618 kubelet[2678]: I0421 09:58:15.302242 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.222913 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-hostproc\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.222958 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cni-path\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.222979 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-config-path\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.222997 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-kernel\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.223024 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-bpf-maps\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223421 kubelet[2678]: I0421 09:58:16.223043 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-xtables-lock\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223724 kubelet[2678]: I0421 09:58:16.223057 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzpk4\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-kube-api-access-pzpk4\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223724 kubelet[2678]: I0421 09:58:16.223089 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzg4s\" (UniqueName: \"kubernetes.io/projected/fb5bf43d-0e39-4f39-b715-afe4bcd513b6-kube-api-access-lzg4s\") pod \"kube-proxy-2t9sk\" (UID: \"fb5bf43d-0e39-4f39-b715-afe4bcd513b6\") " pod="kube-system/kube-proxy-2t9sk"
Apr 21 09:58:16.223724 kubelet[2678]: I0421 09:58:16.223113 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-run\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223724 kubelet[2678]: I0421 09:58:16.223129 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-net\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223724 kubelet[2678]: I0421 09:58:16.223143 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb5bf43d-0e39-4f39-b715-afe4bcd513b6-lib-modules\") pod \"kube-proxy-2t9sk\" (UID: \"fb5bf43d-0e39-4f39-b715-afe4bcd513b6\") " pod="kube-system/kube-proxy-2t9sk"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223161 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-cgroup\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223174 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-etc-cni-netd\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223188 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-lib-modules\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223203 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a17d613a-0513-4d46-a286-dab01a85d70c-clustermesh-secrets\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223223 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-hubble-tls\") pod \"cilium-qhq8n\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " pod="kube-system/cilium-qhq8n"
Apr 21 09:58:16.223828 kubelet[2678]: I0421 09:58:16.223240 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb5bf43d-0e39-4f39-b715-afe4bcd513b6-kube-proxy\") pod \"kube-proxy-2t9sk\" (UID: \"fb5bf43d-0e39-4f39-b715-afe4bcd513b6\") " pod="kube-system/kube-proxy-2t9sk"
Apr 21 09:58:16.223940 kubelet[2678]: I0421 09:58:16.223257 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb5bf43d-0e39-4f39-b715-afe4bcd513b6-xtables-lock\") pod \"kube-proxy-2t9sk\" (UID: \"fb5bf43d-0e39-4f39-b715-afe4bcd513b6\") " pod="kube-system/kube-proxy-2t9sk"
Apr 21 09:58:16.416458 containerd[1603]: time="2026-04-21T09:58:16.415840989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhq8n,Uid:a17d613a-0513-4d46-a286-dab01a85d70c,Namespace:kube-system,Attempt:0,}"
Apr 21 09:58:16.425671 containerd[1603]: time="2026-04-21T09:58:16.425443701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2t9sk,Uid:fb5bf43d-0e39-4f39-b715-afe4bcd513b6,Namespace:kube-system,Attempt:0,}"
Apr 21 09:58:16.426291 kubelet[2678]: I0421 09:58:16.426232 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq8d\" (UniqueName: \"kubernetes.io/projected/7a908e24-c00a-4505-9408-f2cc839230a9-kube-api-access-fcq8d\") pod \"cilium-operator-6c4d7847fc-plsk9\" (UID: \"7a908e24-c00a-4505-9408-f2cc839230a9\") " pod="kube-system/cilium-operator-6c4d7847fc-plsk9"
Apr 21 09:58:16.427625 kubelet[2678]: I0421 09:58:16.427445 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a908e24-c00a-4505-9408-f2cc839230a9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-plsk9\" (UID: \"7a908e24-c00a-4505-9408-f2cc839230a9\") " pod="kube-system/cilium-operator-6c4d7847fc-plsk9"
Apr 21 09:58:16.459416 containerd[1603]: time="2026-04-21T09:58:16.459075470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 09:58:16.461483 containerd[1603]: time="2026-04-21T09:58:16.461233577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 09:58:16.461483 containerd[1603]: time="2026-04-21T09:58:16.461310128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 09:58:16.461483 containerd[1603]: time="2026-04-21T09:58:16.461333045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.462234 containerd[1603]: time="2026-04-21T09:58:16.461447472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.465836 containerd[1603]: time="2026-04-21T09:58:16.459634005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 09:58:16.465836 containerd[1603]: time="2026-04-21T09:58:16.465333256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.465836 containerd[1603]: time="2026-04-21T09:58:16.465548110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.515150 containerd[1603]: time="2026-04-21T09:58:16.514574112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhq8n,Uid:a17d613a-0513-4d46-a286-dab01a85d70c,Namespace:kube-system,Attempt:0,} returns sandbox id \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\""
Apr 21 09:58:16.515899 containerd[1603]: time="2026-04-21T09:58:16.515874679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2t9sk,Uid:fb5bf43d-0e39-4f39-b715-afe4bcd513b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ffdc0f3ab3a40d971ef5a224b7b22419c04402c33407f5a54df753584ca2154\""
Apr 21 09:58:16.518946 containerd[1603]: time="2026-04-21T09:58:16.518913002Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 21 09:58:16.525223 containerd[1603]: time="2026-04-21T09:58:16.524320807Z" level=info msg="CreateContainer within sandbox \"4ffdc0f3ab3a40d971ef5a224b7b22419c04402c33407f5a54df753584ca2154\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 09:58:16.547247 containerd[1603]: time="2026-04-21T09:58:16.547192681Z" level=info msg="CreateContainer within sandbox \"4ffdc0f3ab3a40d971ef5a224b7b22419c04402c33407f5a54df753584ca2154\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d4bca867cacf5d8d01dd4fafd48563df024edcee68a37fb8c87833a5fced90d5\""
Apr 21 09:58:16.548626 containerd[1603]: time="2026-04-21T09:58:16.548482729Z" level=info msg="StartContainer for \"d4bca867cacf5d8d01dd4fafd48563df024edcee68a37fb8c87833a5fced90d5\""
Apr 21 09:58:16.616133 containerd[1603]: time="2026-04-21T09:58:16.616088229Z" level=info msg="StartContainer for \"d4bca867cacf5d8d01dd4fafd48563df024edcee68a37fb8c87833a5fced90d5\" returns successfully"
Apr 21 09:58:16.716184 containerd[1603]: time="2026-04-21T09:58:16.716121520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-plsk9,Uid:7a908e24-c00a-4505-9408-f2cc839230a9,Namespace:kube-system,Attempt:0,}"
Apr 21 09:58:16.757132 containerd[1603]: time="2026-04-21T09:58:16.756796862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 09:58:16.757132 containerd[1603]: time="2026-04-21T09:58:16.756917368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 09:58:16.757132 containerd[1603]: time="2026-04-21T09:58:16.756934606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.758461 containerd[1603]: time="2026-04-21T09:58:16.758216896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 09:58:16.812107 containerd[1603]: time="2026-04-21T09:58:16.811987500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-plsk9,Uid:7a908e24-c00a-4505-9408-f2cc839230a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\""
Apr 21 09:58:20.364832 kubelet[2678]: I0421 09:58:20.364766 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2t9sk" podStartSLOduration=4.364745692 podStartE2EDuration="4.364745692s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:17.484917775 +0000 UTC m=+6.235227203" watchObservedRunningTime="2026-04-21 09:58:20.364745692 +0000 UTC m=+9.115055120"
Apr 21 09:58:21.477267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197833775.mount: Deactivated successfully.
Apr 21 09:58:22.948679 containerd[1603]: time="2026-04-21T09:58:22.948471719Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:22.951231 containerd[1603]: time="2026-04-21T09:58:22.950238338Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 21 09:58:22.952670 containerd[1603]: time="2026-04-21T09:58:22.952591191Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:22.955441 containerd[1603]: time="2026-04-21T09:58:22.954806614Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.435490899s"
Apr 21 09:58:22.955441 containerd[1603]: time="2026-04-21T09:58:22.954858210Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 21 09:58:22.957960 containerd[1603]: time="2026-04-21T09:58:22.957737780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 21 09:58:22.962340 containerd[1603]: time="2026-04-21T09:58:22.962294177Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 09:58:22.984365 containerd[1603]: time="2026-04-21T09:58:22.984316261Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\""
Apr 21 09:58:22.985731 containerd[1603]: time="2026-04-21T09:58:22.985581520Z" level=info msg="StartContainer for \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\""
Apr 21 09:58:23.046454 containerd[1603]: time="2026-04-21T09:58:23.046402736Z" level=info msg="StartContainer for \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\" returns successfully"
Apr 21 09:58:23.281577 containerd[1603]: time="2026-04-21T09:58:23.281471204Z" level=info msg="shim disconnected" id=0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9 namespace=k8s.io
Apr 21 09:58:23.281577 containerd[1603]: time="2026-04-21T09:58:23.281563917Z" level=warning msg="cleaning up after shim disconnected" id=0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9 namespace=k8s.io
Apr 21 09:58:23.281577 containerd[1603]: time="2026-04-21T09:58:23.281574756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 09:58:23.494926 containerd[1603]: time="2026-04-21T09:58:23.494407286Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 09:58:23.521994 containerd[1603]: time="2026-04-21T09:58:23.521914589Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\""
Apr 21 09:58:23.524089 containerd[1603]: time="2026-04-21T09:58:23.522846800Z" level=info msg="StartContainer for \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\""
Apr 21 09:58:23.584973 containerd[1603]: time="2026-04-21T09:58:23.584326204Z" level=info msg="StartContainer for \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\" returns successfully"
Apr 21 09:58:23.597436 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 09:58:23.598196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:58:23.598269 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:58:23.605608 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:58:23.628961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:58:23.634358 containerd[1603]: time="2026-04-21T09:58:23.634199435Z" level=info msg="shim disconnected" id=f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643 namespace=k8s.io
Apr 21 09:58:23.634358 containerd[1603]: time="2026-04-21T09:58:23.634261751Z" level=warning msg="cleaning up after shim disconnected" id=f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643 namespace=k8s.io
Apr 21 09:58:23.634358 containerd[1603]: time="2026-04-21T09:58:23.634270550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 09:58:23.978131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9-rootfs.mount: Deactivated successfully.
Apr 21 09:58:24.513551 containerd[1603]: time="2026-04-21T09:58:24.513267588Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 09:58:24.520235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031225477.mount: Deactivated successfully.
Apr 21 09:58:24.548154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218603967.mount: Deactivated successfully.
Apr 21 09:58:24.552652 containerd[1603]: time="2026-04-21T09:58:24.552394326Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\""
Apr 21 09:58:24.555737 containerd[1603]: time="2026-04-21T09:58:24.555695575Z" level=info msg="StartContainer for \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\""
Apr 21 09:58:24.648616 containerd[1603]: time="2026-04-21T09:58:24.648535548Z" level=info msg="StartContainer for \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\" returns successfully"
Apr 21 09:58:24.692122 containerd[1603]: time="2026-04-21T09:58:24.691914628Z" level=info msg="shim disconnected" id=6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa namespace=k8s.io
Apr 21 09:58:24.692122 containerd[1603]: time="2026-04-21T09:58:24.691997502Z" level=warning msg="cleaning up after shim disconnected" id=6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa namespace=k8s.io
Apr 21 09:58:24.692122 containerd[1603]: time="2026-04-21T09:58:24.692006582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 09:58:24.710605 containerd[1603]: time="2026-04-21T09:58:24.710319818Z" level=warning msg="cleanup warnings time=\"2026-04-21T09:58:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 09:58:24.976569 containerd[1603]: time="2026-04-21T09:58:24.976481445Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:24.977687 containerd[1603]: time="2026-04-21T09:58:24.977646163Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 21 09:58:24.979361 containerd[1603]: time="2026-04-21T09:58:24.978840480Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:58:24.980517 containerd[1603]: time="2026-04-21T09:58:24.980486004Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.022608795s"
Apr 21 09:58:24.980640 containerd[1603]: time="2026-04-21T09:58:24.980621595Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 21 09:58:24.987221 containerd[1603]: time="2026-04-21T09:58:24.987182615Z" level=info msg="CreateContainer within sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 21 09:58:25.006984 containerd[1603]: time="2026-04-21T09:58:25.006941536Z" level=info msg="CreateContainer within sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\""
Apr 21 09:58:25.008673 containerd[1603]: time="2026-04-21T09:58:25.008538071Z" level=info msg="StartContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\""
Apr 21 09:58:25.068201 containerd[1603]: time="2026-04-21T09:58:25.068076360Z" level=info msg="StartContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" returns successfully"
Apr 21 09:58:25.518011 containerd[1603]: time="2026-04-21T09:58:25.517957001Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 09:58:25.537302 containerd[1603]: time="2026-04-21T09:58:25.536698530Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\""
Apr 21 09:58:25.539000 containerd[1603]: time="2026-04-21T09:58:25.537965207Z" level=info msg="StartContainer for \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\""
Apr 21 09:58:25.647669 containerd[1603]: time="2026-04-21T09:58:25.647619802Z" level=info msg="StartContainer for \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\" returns successfully"
Apr 21 09:58:25.663677 kubelet[2678]: I0421 09:58:25.663601 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-plsk9" podStartSLOduration=1.496241379 podStartE2EDuration="9.663582833s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="2026-04-21 09:58:16.814266472 +0000 UTC m=+5.564575860" lastFinishedPulling="2026-04-21 09:58:24.981607926 +0000 UTC m=+13.731917314" observedRunningTime="2026-04-21 09:58:25.535642839 +0000 UTC m=+14.285952307" watchObservedRunningTime="2026-04-21 09:58:25.663582833 +0000 UTC m=+14.413892181"
Apr 21 09:58:25.730388 containerd[1603]: time="2026-04-21T09:58:25.730332368Z" level=info msg="shim disconnected" id=b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a namespace=k8s.io
Apr 21 09:58:25.731130 containerd[1603]: time="2026-04-21T09:58:25.730658426Z" level=warning msg="cleaning up after shim disconnected" id=b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a namespace=k8s.io
Apr 21 09:58:25.731130 containerd[1603]: time="2026-04-21T09:58:25.730676745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 09:58:26.529637 containerd[1603]: time="2026-04-21T09:58:26.529589062Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 09:58:26.551675 containerd[1603]: time="2026-04-21T09:58:26.551622384Z" level=info msg="CreateContainer within sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\""
Apr 21 09:58:26.553008 containerd[1603]: time="2026-04-21T09:58:26.552502530Z" level=info msg="StartContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\""
Apr 21 09:58:26.620220 containerd[1603]: time="2026-04-21T09:58:26.620009972Z" level=info msg="StartContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" returns successfully"
Apr 21 09:58:26.774051 kubelet[2678]: I0421 09:58:26.773226 2678 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 21 09:58:26.907358 kubelet[2678]: I0421 09:58:26.906884 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be30f11a-e637-4fa6-ad76-0762228eb9b7-config-volume\") pod \"coredns-674b8bbfcf-gknrz\" (UID:
\"be30f11a-e637-4fa6-ad76-0762228eb9b7\") " pod="kube-system/coredns-674b8bbfcf-gknrz" Apr 21 09:58:26.907358 kubelet[2678]: I0421 09:58:26.906932 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a87a3d-5205-4752-923b-e1f834a7227a-config-volume\") pod \"coredns-674b8bbfcf-52gbr\" (UID: \"76a87a3d-5205-4752-923b-e1f834a7227a\") " pod="kube-system/coredns-674b8bbfcf-52gbr" Apr 21 09:58:26.907358 kubelet[2678]: I0421 09:58:26.907073 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrb6\" (UniqueName: \"kubernetes.io/projected/be30f11a-e637-4fa6-ad76-0762228eb9b7-kube-api-access-5rrb6\") pod \"coredns-674b8bbfcf-gknrz\" (UID: \"be30f11a-e637-4fa6-ad76-0762228eb9b7\") " pod="kube-system/coredns-674b8bbfcf-gknrz" Apr 21 09:58:26.907358 kubelet[2678]: I0421 09:58:26.907167 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lnj\" (UniqueName: \"kubernetes.io/projected/76a87a3d-5205-4752-923b-e1f834a7227a-kube-api-access-w7lnj\") pod \"coredns-674b8bbfcf-52gbr\" (UID: \"76a87a3d-5205-4752-923b-e1f834a7227a\") " pod="kube-system/coredns-674b8bbfcf-52gbr" Apr 21 09:58:27.126059 containerd[1603]: time="2026-04-21T09:58:27.125235810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gknrz,Uid:be30f11a-e637-4fa6-ad76-0762228eb9b7,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:27.145559 containerd[1603]: time="2026-04-21T09:58:27.145006309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52gbr,Uid:76a87a3d-5205-4752-923b-e1f834a7227a,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:27.547080 kubelet[2678]: I0421 09:58:27.545096 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qhq8n" podStartSLOduration=5.107076697 
podStartE2EDuration="11.545067407s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="2026-04-21 09:58:16.518513689 +0000 UTC m=+5.268823117" lastFinishedPulling="2026-04-21 09:58:22.956504439 +0000 UTC m=+11.706813827" observedRunningTime="2026-04-21 09:58:27.544268053 +0000 UTC m=+16.294577481" watchObservedRunningTime="2026-04-21 09:58:27.545067407 +0000 UTC m=+16.295376835" Apr 21 09:58:28.790363 systemd-networkd[1247]: cilium_host: Link UP Apr 21 09:58:28.791077 systemd-networkd[1247]: cilium_net: Link UP Apr 21 09:58:28.791848 systemd-networkd[1247]: cilium_net: Gained carrier Apr 21 09:58:28.792400 systemd-networkd[1247]: cilium_host: Gained carrier Apr 21 09:58:28.890235 update_engine[1576]: I20260421 09:58:28.890134 1576 update_attempter.cc:509] Updating boot flags... Apr 21 09:58:28.901602 systemd-networkd[1247]: cilium_vxlan: Link UP Apr 21 09:58:28.901610 systemd-networkd[1247]: cilium_vxlan: Gained carrier Apr 21 09:58:28.954119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (3548) Apr 21 09:58:29.237105 kernel: NET: Registered PF_ALG protocol family Apr 21 09:58:29.444274 systemd-networkd[1247]: cilium_net: Gained IPv6LL Apr 21 09:58:29.572274 systemd-networkd[1247]: cilium_host: Gained IPv6LL Apr 21 09:58:29.976530 systemd-networkd[1247]: lxc_health: Link UP Apr 21 09:58:29.979081 systemd-networkd[1247]: lxc_health: Gained carrier Apr 21 09:58:30.019179 systemd-networkd[1247]: cilium_vxlan: Gained IPv6LL Apr 21 09:58:30.207761 systemd-networkd[1247]: lxc93b2e72764e5: Link UP Apr 21 09:58:30.219117 kernel: eth0: renamed from tmp8f564 Apr 21 09:58:30.226721 systemd-networkd[1247]: lxcd1da44280a2a: Link UP Apr 21 09:58:30.237178 systemd-networkd[1247]: lxc93b2e72764e5: Gained carrier Apr 21 09:58:30.245115 kernel: eth0: renamed from tmpae490 Apr 21 09:58:30.253977 systemd-networkd[1247]: lxcd1da44280a2a: Gained carrier Apr 21 09:58:31.171321 systemd-networkd[1247]: lxc_health: 
Gained IPv6LL Apr 21 09:58:31.747369 systemd-networkd[1247]: lxcd1da44280a2a: Gained IPv6LL Apr 21 09:58:31.939341 systemd-networkd[1247]: lxc93b2e72764e5: Gained IPv6LL Apr 21 09:58:34.286259 containerd[1603]: time="2026-04-21T09:58:34.286131501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:34.286259 containerd[1603]: time="2026-04-21T09:58:34.286196978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:34.288439 containerd[1603]: time="2026-04-21T09:58:34.286335573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:34.288439 containerd[1603]: time="2026-04-21T09:58:34.286442089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:34.329057 containerd[1603]: time="2026-04-21T09:58:34.328633659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:34.329057 containerd[1603]: time="2026-04-21T09:58:34.328695416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:34.329057 containerd[1603]: time="2026-04-21T09:58:34.328710656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:34.329057 containerd[1603]: time="2026-04-21T09:58:34.328796173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:34.406530 containerd[1603]: time="2026-04-21T09:58:34.406486677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gknrz,Uid:be30f11a-e637-4fa6-ad76-0762228eb9b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f56471ead115295fe64a4927c34e89941a77d7dac609ced461f2c094e5a61e1\"" Apr 21 09:58:34.419050 containerd[1603]: time="2026-04-21T09:58:34.418978618Z" level=info msg="CreateContainer within sandbox \"8f56471ead115295fe64a4927c34e89941a77d7dac609ced461f2c094e5a61e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:58:34.441177 containerd[1603]: time="2026-04-21T09:58:34.441122604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52gbr,Uid:76a87a3d-5205-4752-923b-e1f834a7227a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae490a7851cee04484fb32e454b671b1bf659a186738273718a0e03c498b1022\"" Apr 21 09:58:34.448856 containerd[1603]: time="2026-04-21T09:58:34.448518092Z" level=info msg="CreateContainer within sandbox \"8f56471ead115295fe64a4927c34e89941a77d7dac609ced461f2c094e5a61e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91733cd5e5f0c75bdb041280573fcd4614031f0bb01ac75f18a4937e556aebde\"" Apr 21 09:58:34.451602 containerd[1603]: time="2026-04-21T09:58:34.451553421Z" level=info msg="StartContainer for \"91733cd5e5f0c75bdb041280573fcd4614031f0bb01ac75f18a4937e556aebde\"" Apr 21 09:58:34.453729 containerd[1603]: time="2026-04-21T09:58:34.453693662Z" level=info msg="CreateContainer within sandbox \"ae490a7851cee04484fb32e454b671b1bf659a186738273718a0e03c498b1022\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:58:34.481323 containerd[1603]: time="2026-04-21T09:58:34.481280528Z" level=info msg="CreateContainer within sandbox \"ae490a7851cee04484fb32e454b671b1bf659a186738273718a0e03c498b1022\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"d4624655127dde99ad7c3d068a1f1a1ce578a7e6f6e4260e75950f62d9de9852\"" Apr 21 09:58:34.482382 containerd[1603]: time="2026-04-21T09:58:34.482355889Z" level=info msg="StartContainer for \"d4624655127dde99ad7c3d068a1f1a1ce578a7e6f6e4260e75950f62d9de9852\"" Apr 21 09:58:34.526710 containerd[1603]: time="2026-04-21T09:58:34.526673820Z" level=info msg="StartContainer for \"91733cd5e5f0c75bdb041280573fcd4614031f0bb01ac75f18a4937e556aebde\" returns successfully" Apr 21 09:58:34.575899 kubelet[2678]: I0421 09:58:34.575185 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gknrz" podStartSLOduration=18.575164277 podStartE2EDuration="18.575164277s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:34.574510461 +0000 UTC m=+23.324819849" watchObservedRunningTime="2026-04-21 09:58:34.575164277 +0000 UTC m=+23.325473665" Apr 21 09:58:34.583048 containerd[1603]: time="2026-04-21T09:58:34.581711797Z" level=info msg="StartContainer for \"d4624655127dde99ad7c3d068a1f1a1ce578a7e6f6e4260e75950f62d9de9852\" returns successfully" Apr 21 09:58:35.578062 kubelet[2678]: I0421 09:58:35.575353 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-52gbr" podStartSLOduration=19.575331596 podStartE2EDuration="19.575331596s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:35.573630375 +0000 UTC m=+24.323939763" watchObservedRunningTime="2026-04-21 09:58:35.575331596 +0000 UTC m=+24.325640984" Apr 21 09:58:42.338745 kubelet[2678]: I0421 09:58:42.338292 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:03.820470 systemd[1]: Started 
sshd@7-178.104.211.77:22-50.85.169.122:45854.service - OpenSSH per-connection server daemon (50.85.169.122:45854). Apr 21 09:59:03.949350 sshd[4090]: Accepted publickey for core from 50.85.169.122 port 45854 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:03.955094 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:03.962318 systemd-logind[1571]: New session 8 of user core. Apr 21 09:59:03.969923 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 09:59:04.169122 sshd[4090]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:04.176554 systemd[1]: sshd@7-178.104.211.77:22-50.85.169.122:45854.service: Deactivated successfully. Apr 21 09:59:04.178106 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Apr 21 09:59:04.179975 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 09:59:04.182472 systemd-logind[1571]: Removed session 8. Apr 21 09:59:09.191224 systemd[1]: Started sshd@8-178.104.211.77:22-50.85.169.122:45870.service - OpenSSH per-connection server daemon (50.85.169.122:45870). Apr 21 09:59:09.312063 sshd[4106]: Accepted publickey for core from 50.85.169.122 port 45870 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:09.314225 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:09.320169 systemd-logind[1571]: New session 9 of user core. Apr 21 09:59:09.325515 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 09:59:09.508269 sshd[4106]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:09.512336 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Apr 21 09:59:09.512848 systemd[1]: sshd@8-178.104.211.77:22-50.85.169.122:45870.service: Deactivated successfully. Apr 21 09:59:09.518725 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 21 09:59:09.520855 systemd-logind[1571]: Removed session 9. Apr 21 09:59:14.532640 systemd[1]: Started sshd@9-178.104.211.77:22-50.85.169.122:59252.service - OpenSSH per-connection server daemon (50.85.169.122:59252). Apr 21 09:59:14.659382 sshd[4123]: Accepted publickey for core from 50.85.169.122 port 59252 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:14.661071 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:14.668677 systemd-logind[1571]: New session 10 of user core. Apr 21 09:59:14.676506 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 09:59:14.864341 sshd[4123]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:14.872240 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Apr 21 09:59:14.874532 systemd[1]: sshd@9-178.104.211.77:22-50.85.169.122:59252.service: Deactivated successfully. Apr 21 09:59:14.877987 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 09:59:14.879698 systemd-logind[1571]: Removed session 10. Apr 21 09:59:19.887380 systemd[1]: Started sshd@10-178.104.211.77:22-50.85.169.122:55966.service - OpenSSH per-connection server daemon (50.85.169.122:55966). Apr 21 09:59:20.012621 sshd[4140]: Accepted publickey for core from 50.85.169.122 port 55966 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:20.015463 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:20.021930 systemd-logind[1571]: New session 11 of user core. Apr 21 09:59:20.028941 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 09:59:20.213387 sshd[4140]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:20.220295 systemd[1]: sshd@10-178.104.211.77:22-50.85.169.122:55966.service: Deactivated successfully. Apr 21 09:59:20.224641 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. 
Apr 21 09:59:20.225321 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 09:59:20.234440 systemd[1]: Started sshd@11-178.104.211.77:22-50.85.169.122:55980.service - OpenSSH per-connection server daemon (50.85.169.122:55980). Apr 21 09:59:20.236451 systemd-logind[1571]: Removed session 11. Apr 21 09:59:20.346938 sshd[4155]: Accepted publickey for core from 50.85.169.122 port 55980 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:20.348569 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:20.355932 systemd-logind[1571]: New session 12 of user core. Apr 21 09:59:20.361609 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 09:59:20.603238 sshd[4155]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:20.609034 systemd[1]: sshd@11-178.104.211.77:22-50.85.169.122:55980.service: Deactivated successfully. Apr 21 09:59:20.617838 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 09:59:20.621833 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Apr 21 09:59:20.635776 systemd[1]: Started sshd@12-178.104.211.77:22-50.85.169.122:55988.service - OpenSSH per-connection server daemon (50.85.169.122:55988). Apr 21 09:59:20.638193 systemd-logind[1571]: Removed session 12. Apr 21 09:59:20.766868 sshd[4167]: Accepted publickey for core from 50.85.169.122 port 55988 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:20.769914 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:20.775948 systemd-logind[1571]: New session 13 of user core. Apr 21 09:59:20.784812 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 09:59:20.959349 sshd[4167]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:20.966300 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. 
Apr 21 09:59:20.966584 systemd[1]: sshd@12-178.104.211.77:22-50.85.169.122:55988.service: Deactivated successfully. Apr 21 09:59:20.969885 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 09:59:20.972747 systemd-logind[1571]: Removed session 13. Apr 21 09:59:25.984440 systemd[1]: Started sshd@13-178.104.211.77:22-50.85.169.122:55990.service - OpenSSH per-connection server daemon (50.85.169.122:55990). Apr 21 09:59:26.106006 sshd[4181]: Accepted publickey for core from 50.85.169.122 port 55990 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:26.109068 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:26.114330 systemd-logind[1571]: New session 14 of user core. Apr 21 09:59:26.125583 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 09:59:26.301263 sshd[4181]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:26.307252 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. Apr 21 09:59:26.309335 systemd[1]: sshd@13-178.104.211.77:22-50.85.169.122:55990.service: Deactivated successfully. Apr 21 09:59:26.312900 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 09:59:26.315696 systemd-logind[1571]: Removed session 14. Apr 21 09:59:31.328560 systemd[1]: Started sshd@14-178.104.211.77:22-50.85.169.122:41556.service - OpenSSH per-connection server daemon (50.85.169.122:41556). Apr 21 09:59:31.448112 sshd[4196]: Accepted publickey for core from 50.85.169.122 port 41556 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:31.450110 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:31.456340 systemd-logind[1571]: New session 15 of user core. Apr 21 09:59:31.461468 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 21 09:59:31.637215 sshd[4196]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:31.643197 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Apr 21 09:59:31.644286 systemd[1]: sshd@14-178.104.211.77:22-50.85.169.122:41556.service: Deactivated successfully. Apr 21 09:59:31.649316 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 09:59:31.650661 systemd-logind[1571]: Removed session 15. Apr 21 09:59:31.662403 systemd[1]: Started sshd@15-178.104.211.77:22-50.85.169.122:41570.service - OpenSSH per-connection server daemon (50.85.169.122:41570). Apr 21 09:59:31.788114 sshd[4210]: Accepted publickey for core from 50.85.169.122 port 41570 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:31.790123 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:31.796003 systemd-logind[1571]: New session 16 of user core. Apr 21 09:59:31.801419 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 09:59:32.038677 sshd[4210]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:32.045589 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Apr 21 09:59:32.045738 systemd[1]: sshd@15-178.104.211.77:22-50.85.169.122:41570.service: Deactivated successfully. Apr 21 09:59:32.050877 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 09:59:32.052664 systemd-logind[1571]: Removed session 16. Apr 21 09:59:32.061641 systemd[1]: Started sshd@16-178.104.211.77:22-50.85.169.122:41582.service - OpenSSH per-connection server daemon (50.85.169.122:41582). Apr 21 09:59:32.191765 sshd[4221]: Accepted publickey for core from 50.85.169.122 port 41582 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:32.194346 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:32.199128 systemd-logind[1571]: New session 17 of user core. 
Apr 21 09:59:32.207388 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 09:59:33.024282 sshd[4221]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:33.034522 systemd[1]: sshd@16-178.104.211.77:22-50.85.169.122:41582.service: Deactivated successfully. Apr 21 09:59:33.036240 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Apr 21 09:59:33.044575 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 09:59:33.048155 systemd-logind[1571]: Removed session 17. Apr 21 09:59:33.058504 systemd[1]: Started sshd@17-178.104.211.77:22-50.85.169.122:41590.service - OpenSSH per-connection server daemon (50.85.169.122:41590). Apr 21 09:59:33.181791 sshd[4240]: Accepted publickey for core from 50.85.169.122 port 41590 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:33.184066 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:33.190389 systemd-logind[1571]: New session 18 of user core. Apr 21 09:59:33.196687 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 09:59:33.494978 sshd[4240]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:33.502965 systemd[1]: sshd@17-178.104.211.77:22-50.85.169.122:41590.service: Deactivated successfully. Apr 21 09:59:33.506550 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Apr 21 09:59:33.508462 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 09:59:33.512678 systemd-logind[1571]: Removed session 18. Apr 21 09:59:33.524192 systemd[1]: Started sshd@18-178.104.211.77:22-50.85.169.122:41600.service - OpenSSH per-connection server daemon (50.85.169.122:41600). 
Apr 21 09:59:33.647603 sshd[4252]: Accepted publickey for core from 50.85.169.122 port 41600 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:33.650116 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:33.656604 systemd-logind[1571]: New session 19 of user core. Apr 21 09:59:33.665553 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 09:59:33.841371 sshd[4252]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:33.848333 systemd[1]: sshd@18-178.104.211.77:22-50.85.169.122:41600.service: Deactivated successfully. Apr 21 09:59:33.851809 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. Apr 21 09:59:33.853790 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 09:59:33.855397 systemd-logind[1571]: Removed session 19. Apr 21 09:59:38.865689 systemd[1]: Started sshd@19-178.104.211.77:22-50.85.169.122:41616.service - OpenSSH per-connection server daemon (50.85.169.122:41616). Apr 21 09:59:38.988556 sshd[4268]: Accepted publickey for core from 50.85.169.122 port 41616 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:38.989889 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:38.994835 systemd-logind[1571]: New session 20 of user core. Apr 21 09:59:39.006598 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 09:59:39.187404 sshd[4268]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:39.195906 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Apr 21 09:59:39.196454 systemd[1]: sshd@19-178.104.211.77:22-50.85.169.122:41616.service: Deactivated successfully. Apr 21 09:59:39.205115 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 09:59:39.210374 systemd-logind[1571]: Removed session 20. 
Apr 21 09:59:44.211315 systemd[1]: Started sshd@20-178.104.211.77:22-50.85.169.122:39264.service - OpenSSH per-connection server daemon (50.85.169.122:39264). Apr 21 09:59:44.326826 sshd[4282]: Accepted publickey for core from 50.85.169.122 port 39264 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:44.330277 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:44.335355 systemd-logind[1571]: New session 21 of user core. Apr 21 09:59:44.341684 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 09:59:44.513349 sshd[4282]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:44.521738 systemd[1]: sshd@20-178.104.211.77:22-50.85.169.122:39264.service: Deactivated successfully. Apr 21 09:59:44.522540 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Apr 21 09:59:44.526320 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 09:59:44.527890 systemd-logind[1571]: Removed session 21. Apr 21 09:59:49.537348 systemd[1]: Started sshd@21-178.104.211.77:22-50.85.169.122:43534.service - OpenSSH per-connection server daemon (50.85.169.122:43534). Apr 21 09:59:49.658226 sshd[4298]: Accepted publickey for core from 50.85.169.122 port 43534 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:49.660752 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:49.666672 systemd-logind[1571]: New session 22 of user core. Apr 21 09:59:49.672461 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 09:59:49.840366 sshd[4298]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:49.845654 systemd[1]: sshd@21-178.104.211.77:22-50.85.169.122:43534.service: Deactivated successfully. Apr 21 09:59:49.849696 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. 
Apr 21 09:59:49.850467 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 09:59:49.853705 systemd-logind[1571]: Removed session 22. Apr 21 09:59:49.861423 systemd[1]: Started sshd@22-178.104.211.77:22-50.85.169.122:43538.service - OpenSSH per-connection server daemon (50.85.169.122:43538). Apr 21 09:59:49.979668 sshd[4312]: Accepted publickey for core from 50.85.169.122 port 43538 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:49.983325 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:49.989104 systemd-logind[1571]: New session 23 of user core. Apr 21 09:59:49.997558 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 21 09:59:51.416045 containerd[1603]: time="2026-04-21T09:59:51.415978951Z" level=info msg="StopContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" with timeout 30 (s)" Apr 21 09:59:51.419554 containerd[1603]: time="2026-04-21T09:59:51.419406310Z" level=info msg="Stop container \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" with signal terminated" Apr 21 09:59:51.455298 containerd[1603]: time="2026-04-21T09:59:51.454919077Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 09:59:51.465533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96-rootfs.mount: Deactivated successfully. 
Apr 21 09:59:51.469474 containerd[1603]: time="2026-04-21T09:59:51.469115480Z" level=info msg="StopContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" with timeout 2 (s)" Apr 21 09:59:51.469474 containerd[1603]: time="2026-04-21T09:59:51.469412763Z" level=info msg="Stop container \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" with signal terminated" Apr 21 09:59:51.478261 systemd-networkd[1247]: lxc_health: Link DOWN Apr 21 09:59:51.478270 systemd-networkd[1247]: lxc_health: Lost carrier Apr 21 09:59:51.485725 containerd[1603]: time="2026-04-21T09:59:51.483965930Z" level=info msg="shim disconnected" id=c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96 namespace=k8s.io Apr 21 09:59:51.485725 containerd[1603]: time="2026-04-21T09:59:51.484041131Z" level=warning msg="cleaning up after shim disconnected" id=c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96 namespace=k8s.io Apr 21 09:59:51.485725 containerd[1603]: time="2026-04-21T09:59:51.484053571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:51.516339 kubelet[2678]: E0421 09:59:51.516298 2678 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 09:59:51.518455 containerd[1603]: time="2026-04-21T09:59:51.517981400Z" level=info msg="StopContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" returns successfully" Apr 21 09:59:51.519007 containerd[1603]: time="2026-04-21T09:59:51.518775729Z" level=info msg="StopPodSandbox for \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\"" Apr 21 09:59:51.519007 containerd[1603]: time="2026-04-21T09:59:51.518815289Z" level=info msg="Container to stop \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 
09:59:51.521615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d-shm.mount: Deactivated successfully. Apr 21 09:59:51.536471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193-rootfs.mount: Deactivated successfully. Apr 21 09:59:51.548050 containerd[1603]: time="2026-04-21T09:59:51.547866622Z" level=info msg="shim disconnected" id=15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193 namespace=k8s.io Apr 21 09:59:51.548050 containerd[1603]: time="2026-04-21T09:59:51.547940263Z" level=warning msg="cleaning up after shim disconnected" id=15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193 namespace=k8s.io Apr 21 09:59:51.548050 containerd[1603]: time="2026-04-21T09:59:51.547948783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:51.564850 containerd[1603]: time="2026-04-21T09:59:51.564788376Z" level=info msg="shim disconnected" id=0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d namespace=k8s.io Apr 21 09:59:51.564850 containerd[1603]: time="2026-04-21T09:59:51.564842817Z" level=warning msg="cleaning up after shim disconnected" id=0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d namespace=k8s.io Apr 21 09:59:51.564850 containerd[1603]: time="2026-04-21T09:59:51.564851337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:51.569966 containerd[1603]: time="2026-04-21T09:59:51.569882154Z" level=info msg="StopContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" returns successfully" Apr 21 09:59:51.570709 containerd[1603]: time="2026-04-21T09:59:51.570685004Z" level=info msg="StopPodSandbox for \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\"" Apr 21 09:59:51.570965 containerd[1603]: time="2026-04-21T09:59:51.570933967Z" level=info msg="Container to stop 
\"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 09:59:51.571106 containerd[1603]: time="2026-04-21T09:59:51.571084968Z" level=info msg="Container to stop \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 09:59:51.571237 containerd[1603]: time="2026-04-21T09:59:51.571152169Z" level=info msg="Container to stop \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 09:59:51.571237 containerd[1603]: time="2026-04-21T09:59:51.571178489Z" level=info msg="Container to stop \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 09:59:51.571237 containerd[1603]: time="2026-04-21T09:59:51.571189649Z" level=info msg="Container to stop \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 09:59:51.585232 containerd[1603]: time="2026-04-21T09:59:51.585194290Z" level=info msg="TearDown network for sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" successfully" Apr 21 09:59:51.585455 containerd[1603]: time="2026-04-21T09:59:51.585352052Z" level=info msg="StopPodSandbox for \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" returns successfully" Apr 21 09:59:51.616060 containerd[1603]: time="2026-04-21T09:59:51.615889882Z" level=info msg="shim disconnected" id=745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd namespace=k8s.io Apr 21 09:59:51.616478 containerd[1603]: time="2026-04-21T09:59:51.616150885Z" level=warning msg="cleaning up after shim disconnected" id=745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd namespace=k8s.io Apr 21 
09:59:51.616478 containerd[1603]: time="2026-04-21T09:59:51.616182645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:51.629353 containerd[1603]: time="2026-04-21T09:59:51.629246275Z" level=info msg="TearDown network for sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" successfully" Apr 21 09:59:51.629353 containerd[1603]: time="2026-04-21T09:59:51.629284835Z" level=info msg="StopPodSandbox for \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" returns successfully" Apr 21 09:59:51.733294 kubelet[2678]: I0421 09:59:51.732802 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-lib-modules\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733294 kubelet[2678]: I0421 09:59:51.732884 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-config-path\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733294 kubelet[2678]: I0421 09:59:51.732946 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-bpf-maps\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733294 kubelet[2678]: I0421 09:59:51.732986 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzpk4\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-kube-api-access-pzpk4\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733294 kubelet[2678]: I0421 
09:59:51.733051 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-run\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733294 kubelet[2678]: I0421 09:59:51.733070 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733089 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcq8d\" (UniqueName: \"kubernetes.io/projected/7a908e24-c00a-4505-9408-f2cc839230a9-kube-api-access-fcq8d\") pod \"7a908e24-c00a-4505-9408-f2cc839230a9\" (UID: \"7a908e24-c00a-4505-9408-f2cc839230a9\") " Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733188 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-kernel\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733233 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a17d613a-0513-4d46-a286-dab01a85d70c-clustermesh-secrets\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733272 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cni-path\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733318 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-hubble-tls\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.733795 kubelet[2678]: I0421 09:59:51.733363 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-xtables-lock\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733402 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-cgroup\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733476 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-etc-cni-netd\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733515 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-hostproc\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733563 2678 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-net\") pod \"a17d613a-0513-4d46-a286-dab01a85d70c\" (UID: \"a17d613a-0513-4d46-a286-dab01a85d70c\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733604 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a908e24-c00a-4505-9408-f2cc839230a9-cilium-config-path\") pod \"7a908e24-c00a-4505-9408-f2cc839230a9\" (UID: \"7a908e24-c00a-4505-9408-f2cc839230a9\") " Apr 21 09:59:51.734429 kubelet[2678]: I0421 09:59:51.733681 2678 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-lib-modules\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.734732 kubelet[2678]: I0421 09:59:51.734244 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742257 kubelet[2678]: I0421 09:59:51.740073 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742257 kubelet[2678]: I0421 09:59:51.740114 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742257 kubelet[2678]: I0421 09:59:51.740167 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742257 kubelet[2678]: I0421 09:59:51.741453 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742257 kubelet[2678]: I0421 09:59:51.741508 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742607 kubelet[2678]: I0421 09:59:51.741525 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742607 kubelet[2678]: I0421 09:59:51.741541 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742607 kubelet[2678]: I0421 09:59:51.741582 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 09:59:51.742607 kubelet[2678]: I0421 09:59:51.741790 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a908e24-c00a-4505-9408-f2cc839230a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a908e24-c00a-4505-9408-f2cc839230a9" (UID: "7a908e24-c00a-4505-9408-f2cc839230a9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:59:51.747976 kubelet[2678]: I0421 09:59:51.747658 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:59:51.753359 kubelet[2678]: I0421 09:59:51.752674 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-kube-api-access-pzpk4" (OuterVolumeSpecName: "kube-api-access-pzpk4") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "kube-api-access-pzpk4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 09:59:51.753359 kubelet[2678]: I0421 09:59:51.752771 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 09:59:51.753753 kubelet[2678]: I0421 09:59:51.753723 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a908e24-c00a-4505-9408-f2cc839230a9-kube-api-access-fcq8d" (OuterVolumeSpecName: "kube-api-access-fcq8d") pod "7a908e24-c00a-4505-9408-f2cc839230a9" (UID: "7a908e24-c00a-4505-9408-f2cc839230a9"). InnerVolumeSpecName "kube-api-access-fcq8d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 09:59:51.753817 kubelet[2678]: I0421 09:59:51.753792 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a17d613a-0513-4d46-a286-dab01a85d70c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a17d613a-0513-4d46-a286-dab01a85d70c" (UID: "a17d613a-0513-4d46-a286-dab01a85d70c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 09:59:51.760295 kubelet[2678]: I0421 09:59:51.760198 2678 scope.go:117] "RemoveContainer" containerID="c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96" Apr 21 09:59:51.768779 containerd[1603]: time="2026-04-21T09:59:51.766711570Z" level=info msg="RemoveContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\"" Apr 21 09:59:51.779051 containerd[1603]: time="2026-04-21T09:59:51.777538494Z" level=info msg="RemoveContainer for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" returns successfully" Apr 21 09:59:51.780792 kubelet[2678]: I0421 09:59:51.780693 2678 scope.go:117] "RemoveContainer" containerID="c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96" Apr 21 09:59:51.781818 containerd[1603]: time="2026-04-21T09:59:51.781774143Z" level=error msg="ContainerStatus for \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\": not found" Apr 21 09:59:51.782242 kubelet[2678]: E0421 09:59:51.782087 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\": not found" containerID="c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96" Apr 21 09:59:51.782242 
kubelet[2678]: I0421 09:59:51.782119 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96"} err="failed to get container status \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4600a527cef32447ac5151e915f9b724de98e172d7c03585369fc3cee1a1e96\": not found" Apr 21 09:59:51.782242 kubelet[2678]: I0421 09:59:51.782153 2678 scope.go:117] "RemoveContainer" containerID="15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193" Apr 21 09:59:51.784872 containerd[1603]: time="2026-04-21T09:59:51.784836378Z" level=info msg="RemoveContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\"" Apr 21 09:59:51.785426 kubelet[2678]: E0421 09:59:51.785402 2678 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/poda17d613a-0513-4d46-a286-dab01a85d70c\": RecentStats: unable to find data in memory cache]" Apr 21 09:59:51.792802 containerd[1603]: time="2026-04-21T09:59:51.792762789Z" level=info msg="RemoveContainer for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" returns successfully" Apr 21 09:59:51.793437 kubelet[2678]: I0421 09:59:51.793336 2678 scope.go:117] "RemoveContainer" containerID="b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a" Apr 21 09:59:51.795801 containerd[1603]: time="2026-04-21T09:59:51.795513820Z" level=info msg="RemoveContainer for \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\"" Apr 21 09:59:51.799122 containerd[1603]: time="2026-04-21T09:59:51.799003580Z" level=info msg="RemoveContainer for \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\" returns successfully" Apr 21 09:59:51.799592 kubelet[2678]: I0421 09:59:51.799493 2678 scope.go:117] "RemoveContainer" 
containerID="6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa" Apr 21 09:59:51.800713 containerd[1603]: time="2026-04-21T09:59:51.800634239Z" level=info msg="RemoveContainer for \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\"" Apr 21 09:59:51.806721 containerd[1603]: time="2026-04-21T09:59:51.806459026Z" level=info msg="RemoveContainer for \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\" returns successfully" Apr 21 09:59:51.812299 kubelet[2678]: I0421 09:59:51.811883 2678 scope.go:117] "RemoveContainer" containerID="f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643" Apr 21 09:59:51.814689 containerd[1603]: time="2026-04-21T09:59:51.814648200Z" level=info msg="RemoveContainer for \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\"" Apr 21 09:59:51.821162 containerd[1603]: time="2026-04-21T09:59:51.821128554Z" level=info msg="RemoveContainer for \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\" returns successfully" Apr 21 09:59:51.821720 kubelet[2678]: I0421 09:59:51.821585 2678 scope.go:117] "RemoveContainer" containerID="0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9" Apr 21 09:59:51.824044 containerd[1603]: time="2026-04-21T09:59:51.823970466Z" level=info msg="RemoveContainer for \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\"" Apr 21 09:59:51.829291 containerd[1603]: time="2026-04-21T09:59:51.829134766Z" level=info msg="RemoveContainer for \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\" returns successfully" Apr 21 09:59:51.829745 kubelet[2678]: I0421 09:59:51.829680 2678 scope.go:117] "RemoveContainer" containerID="15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193" Apr 21 09:59:51.830086 containerd[1603]: time="2026-04-21T09:59:51.830009376Z" level=error msg="ContainerStatus for \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\": not found" Apr 21 09:59:51.830388 kubelet[2678]: E0421 09:59:51.830247 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\": not found" containerID="15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193" Apr 21 09:59:51.830388 kubelet[2678]: I0421 09:59:51.830277 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193"} err="failed to get container status \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\": rpc error: code = NotFound desc = an error occurred when try to find container \"15c9300aecb5f2511ad55e1a4ff9fd0a038987a83899134c37cf722e9fe5d193\": not found" Apr 21 09:59:51.830388 kubelet[2678]: I0421 09:59:51.830302 2678 scope.go:117] "RemoveContainer" containerID="b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a" Apr 21 09:59:51.830500 containerd[1603]: time="2026-04-21T09:59:51.830447381Z" level=error msg="ContainerStatus for \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\": not found" Apr 21 09:59:51.830772 kubelet[2678]: E0421 09:59:51.830624 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\": not found" containerID="b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a" Apr 21 09:59:51.830772 kubelet[2678]: I0421 09:59:51.830682 2678 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a"} err="failed to get container status \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5427daa1aca5a814985cf54ad2afef1eb49d6a9062c7dd67b2b89d3240e184a\": not found" Apr 21 09:59:51.830772 kubelet[2678]: I0421 09:59:51.830698 2678 scope.go:117] "RemoveContainer" containerID="6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa" Apr 21 09:59:51.830894 containerd[1603]: time="2026-04-21T09:59:51.830852305Z" level=error msg="ContainerStatus for \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\": not found" Apr 21 09:59:51.831361 kubelet[2678]: E0421 09:59:51.831089 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\": not found" containerID="6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa" Apr 21 09:59:51.831361 kubelet[2678]: I0421 09:59:51.831115 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa"} err="failed to get container status \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6651bceef6c6b092f9aa9c1c6945227340e0daca0954492cf7f5cb855e08e8aa\": not found" Apr 21 09:59:51.831361 kubelet[2678]: I0421 09:59:51.831130 2678 scope.go:117] "RemoveContainer" 
containerID="f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643" Apr 21 09:59:51.831488 containerd[1603]: time="2026-04-21T09:59:51.831299910Z" level=error msg="ContainerStatus for \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\": not found" Apr 21 09:59:51.831704 kubelet[2678]: E0421 09:59:51.831588 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\": not found" containerID="f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643" Apr 21 09:59:51.831704 kubelet[2678]: I0421 09:59:51.831616 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643"} err="failed to get container status \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\": rpc error: code = NotFound desc = an error occurred when try to find container \"f46a5baea2ad77e75b15a8d50fc2054f135580031611e1a295e58ef30eb68643\": not found" Apr 21 09:59:51.831704 kubelet[2678]: I0421 09:59:51.831631 2678 scope.go:117] "RemoveContainer" containerID="0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9" Apr 21 09:59:51.831812 containerd[1603]: time="2026-04-21T09:59:51.831787356Z" level=error msg="ContainerStatus for \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\": not found" Apr 21 09:59:51.831975 kubelet[2678]: E0421 09:59:51.831893 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\": not found" containerID="0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9" Apr 21 09:59:51.831975 kubelet[2678]: I0421 09:59:51.831967 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9"} err="failed to get container status \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"0dc6e2a1e4718055963db98f3cc2667582a389ec0537874ffe27773bacc923f9\": not found" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834354 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-config-path\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834390 2678 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-bpf-maps\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834405 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzpk4\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-kube-api-access-pzpk4\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834421 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-run\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834441 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcq8d\" 
(UniqueName: \"kubernetes.io/projected/7a908e24-c00a-4505-9408-f2cc839230a9-kube-api-access-fcq8d\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834455 2678 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-kernel\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834471 2678 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a17d613a-0513-4d46-a286-dab01a85d70c-clustermesh-secrets\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834607 kubelet[2678]: I0421 09:59:51.834486 2678 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cni-path\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834499 2678 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a17d613a-0513-4d46-a286-dab01a85d70c-hubble-tls\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834514 2678 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-xtables-lock\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834528 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-cilium-cgroup\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834543 2678 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-etc-cni-netd\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834556 2678 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-hostproc\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834569 2678 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a17d613a-0513-4d46-a286-dab01a85d70c-host-proc-sys-net\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:51.834953 kubelet[2678]: I0421 09:59:51.834584 2678 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a908e24-c00a-4505-9408-f2cc839230a9-cilium-config-path\") on node \"ci-4081-3-7-7-fa740892b3\" DevicePath \"\"" Apr 21 09:59:52.430334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d-rootfs.mount: Deactivated successfully. Apr 21 09:59:52.431083 systemd[1]: var-lib-kubelet-pods-7a908e24\x2dc00a\x2d4505\x2d9408\x2df2cc839230a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcq8d.mount: Deactivated successfully. Apr 21 09:59:52.431233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd-rootfs.mount: Deactivated successfully. Apr 21 09:59:52.431333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd-shm.mount: Deactivated successfully. Apr 21 09:59:52.431416 systemd[1]: var-lib-kubelet-pods-a17d613a\x2d0513\x2d4d46\x2da286\x2ddab01a85d70c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpzpk4.mount: Deactivated successfully. 
Apr 21 09:59:52.431503 systemd[1]: var-lib-kubelet-pods-a17d613a\x2d0513\x2d4d46\x2da286\x2ddab01a85d70c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 09:59:52.431639 systemd[1]: var-lib-kubelet-pods-a17d613a\x2d0513\x2d4d46\x2da286\x2ddab01a85d70c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 09:59:53.361332 sshd[4312]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:53.366829 systemd[1]: sshd@22-178.104.211.77:22-50.85.169.122:43538.service: Deactivated successfully. Apr 21 09:59:53.373558 systemd[1]: session-23.scope: Deactivated successfully. Apr 21 09:59:53.377066 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Apr 21 09:59:53.384480 systemd[1]: Started sshd@23-178.104.211.77:22-50.85.169.122:43548.service - OpenSSH per-connection server daemon (50.85.169.122:43548). Apr 21 09:59:53.387203 systemd-logind[1571]: Removed session 23. Apr 21 09:59:53.405142 kubelet[2678]: I0421 09:59:53.405091 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a908e24-c00a-4505-9408-f2cc839230a9" path="/var/lib/kubelet/pods/7a908e24-c00a-4505-9408-f2cc839230a9/volumes" Apr 21 09:59:53.405741 kubelet[2678]: I0421 09:59:53.405481 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17d613a-0513-4d46-a286-dab01a85d70c" path="/var/lib/kubelet/pods/a17d613a-0513-4d46-a286-dab01a85d70c/volumes" Apr 21 09:59:53.509718 sshd[4484]: Accepted publickey for core from 50.85.169.122 port 43548 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:53.511482 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:53.517049 systemd-logind[1571]: New session 24 of user core. Apr 21 09:59:53.523652 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 21 09:59:54.045533 kubelet[2678]: I0421 09:59:54.045295 2678 setters.go:618] "Node became not ready" node="ci-4081-3-7-7-fa740892b3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T09:59:54Z","lastTransitionTime":"2026-04-21T09:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 21 09:59:55.161688 sshd[4484]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:55.176611 systemd[1]: sshd@23-178.104.211.77:22-50.85.169.122:43548.service: Deactivated successfully. Apr 21 09:59:55.178685 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit. Apr 21 09:59:55.199291 systemd[1]: Started sshd@24-178.104.211.77:22-50.85.169.122:43556.service - OpenSSH per-connection server daemon (50.85.169.122:43556). Apr 21 09:59:55.199603 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 09:59:55.202398 systemd-logind[1571]: Removed session 24. Apr 21 09:59:55.341389 sshd[4496]: Accepted publickey for core from 50.85.169.122 port 43556 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:55.345271 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:55.350983 systemd-logind[1571]: New session 25 of user core. Apr 21 09:59:55.359589 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360525 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aaee9bc-8898-401a-9d31-4fdc92f04c75-cilium-config-path\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360588 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-hostproc\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360616 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-cilium-run\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360645 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-etc-cni-netd\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360758 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lnz\" (UniqueName: \"kubernetes.io/projected/8aaee9bc-8898-401a-9d31-4fdc92f04c75-kube-api-access-c7lnz\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.361432 kubelet[2678]: I0421 09:59:55.360892 2678 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-bpf-maps\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.360960 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-cni-path\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.360994 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aaee9bc-8898-401a-9d31-4fdc92f04c75-clustermesh-secrets\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.361079 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8aaee9bc-8898-401a-9d31-4fdc92f04c75-cilium-ipsec-secrets\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.361131 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-host-proc-sys-net\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.361181 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-host-proc-sys-kernel\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362267 kubelet[2678]: I0421 09:59:55.361213 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aaee9bc-8898-401a-9d31-4fdc92f04c75-hubble-tls\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362536 kubelet[2678]: I0421 09:59:55.361240 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-cilium-cgroup\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.362536 kubelet[2678]: I0421 09:59:55.362331 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-lib-modules\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.364312 kubelet[2678]: I0421 09:59:55.364082 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aaee9bc-8898-401a-9d31-4fdc92f04c75-xtables-lock\") pod \"cilium-dg9cl\" (UID: \"8aaee9bc-8898-401a-9d31-4fdc92f04c75\") " pod="kube-system/cilium-dg9cl" Apr 21 09:59:55.461194 sshd[4496]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:55.480178 systemd[1]: sshd@24-178.104.211.77:22-50.85.169.122:43556.service: Deactivated successfully. Apr 21 09:59:55.505395 systemd[1]: session-25.scope: Deactivated successfully. 
Apr 21 09:59:55.515856 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit. Apr 21 09:59:55.525482 systemd[1]: Started sshd@25-178.104.211.77:22-50.85.169.122:43558.service - OpenSSH per-connection server daemon (50.85.169.122:43558). Apr 21 09:59:55.528888 systemd-logind[1571]: Removed session 25. Apr 21 09:59:55.531424 containerd[1603]: time="2026-04-21T09:59:55.531386684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg9cl,Uid:8aaee9bc-8898-401a-9d31-4fdc92f04c75,Namespace:kube-system,Attempt:0,}" Apr 21 09:59:55.558695 containerd[1603]: time="2026-04-21T09:59:55.558332813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:55.558695 containerd[1603]: time="2026-04-21T09:59:55.558413774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:55.558695 containerd[1603]: time="2026-04-21T09:59:55.558530495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:55.559506 containerd[1603]: time="2026-04-21T09:59:55.559316304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:55.603751 containerd[1603]: time="2026-04-21T09:59:55.602397727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg9cl,Uid:8aaee9bc-8898-401a-9d31-4fdc92f04c75,Namespace:kube-system,Attempt:0,} returns sandbox id \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\"" Apr 21 09:59:55.611353 containerd[1603]: time="2026-04-21T09:59:55.611316463Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 09:59:55.628799 containerd[1603]: time="2026-04-21T09:59:55.628672409Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e49c23466626f8de40464382397f9f10725e3583bc4425c1f72aa4d8fb98d37\"" Apr 21 09:59:55.631377 containerd[1603]: time="2026-04-21T09:59:55.631082355Z" level=info msg="StartContainer for \"1e49c23466626f8de40464382397f9f10725e3583bc4425c1f72aa4d8fb98d37\"" Apr 21 09:59:55.644961 sshd[4509]: Accepted publickey for core from 50.85.169.122 port 43558 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:55.647177 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:55.655313 systemd-logind[1571]: New session 26 of user core. Apr 21 09:59:55.661827 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 21 09:59:55.694425 containerd[1603]: time="2026-04-21T09:59:55.694313075Z" level=info msg="StartContainer for \"1e49c23466626f8de40464382397f9f10725e3583bc4425c1f72aa4d8fb98d37\" returns successfully" Apr 21 09:59:55.739436 containerd[1603]: time="2026-04-21T09:59:55.739206358Z" level=info msg="shim disconnected" id=1e49c23466626f8de40464382397f9f10725e3583bc4425c1f72aa4d8fb98d37 namespace=k8s.io Apr 21 09:59:55.739436 containerd[1603]: time="2026-04-21T09:59:55.739270398Z" level=warning msg="cleaning up after shim disconnected" id=1e49c23466626f8de40464382397f9f10725e3583bc4425c1f72aa4d8fb98d37 namespace=k8s.io Apr 21 09:59:55.739436 containerd[1603]: time="2026-04-21T09:59:55.739282358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:55.798757 containerd[1603]: time="2026-04-21T09:59:55.797979189Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 09:59:55.824809 containerd[1603]: time="2026-04-21T09:59:55.824706757Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a43fa14ed11e7ac79da46fce50d5b36be372f9fb47d1de0cec87b152ac339eb5\"" Apr 21 09:59:55.828036 containerd[1603]: time="2026-04-21T09:59:55.826058291Z" level=info msg="StartContainer for \"a43fa14ed11e7ac79da46fce50d5b36be372f9fb47d1de0cec87b152ac339eb5\"" Apr 21 09:59:55.902005 containerd[1603]: time="2026-04-21T09:59:55.901915667Z" level=info msg="StartContainer for \"a43fa14ed11e7ac79da46fce50d5b36be372f9fb47d1de0cec87b152ac339eb5\" returns successfully" Apr 21 09:59:55.940108 containerd[1603]: time="2026-04-21T09:59:55.940045037Z" level=info msg="shim disconnected" id=a43fa14ed11e7ac79da46fce50d5b36be372f9fb47d1de0cec87b152ac339eb5 namespace=k8s.io Apr 21 09:59:55.940108 
containerd[1603]: time="2026-04-21T09:59:55.940100637Z" level=warning msg="cleaning up after shim disconnected" id=a43fa14ed11e7ac79da46fce50d5b36be372f9fb47d1de0cec87b152ac339eb5 namespace=k8s.io Apr 21 09:59:55.940108 containerd[1603]: time="2026-04-21T09:59:55.940109397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:56.518306 kubelet[2678]: E0421 09:59:56.518161 2678 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 09:59:56.803457 containerd[1603]: time="2026-04-21T09:59:56.803317705Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 09:59:56.824082 containerd[1603]: time="2026-04-21T09:59:56.823667320Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba\"" Apr 21 09:59:56.824733 containerd[1603]: time="2026-04-21T09:59:56.824696851Z" level=info msg="StartContainer for \"01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba\"" Apr 21 09:59:56.894050 containerd[1603]: time="2026-04-21T09:59:56.892920013Z" level=info msg="StartContainer for \"01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba\" returns successfully" Apr 21 09:59:56.927519 containerd[1603]: time="2026-04-21T09:59:56.927460299Z" level=info msg="shim disconnected" id=01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba namespace=k8s.io Apr 21 09:59:56.927519 containerd[1603]: time="2026-04-21T09:59:56.927519060Z" level=warning msg="cleaning up after shim disconnected" id=01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba namespace=k8s.io Apr 21 
09:59:56.927519 containerd[1603]: time="2026-04-21T09:59:56.927528980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:57.471551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01b0674cc7f84fb66c604f4fc2e32ee2d3956de4e2eac63b2589feb4a35157ba-rootfs.mount: Deactivated successfully. Apr 21 09:59:57.807127 containerd[1603]: time="2026-04-21T09:59:57.806311475Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 09:59:57.836756 containerd[1603]: time="2026-04-21T09:59:57.836641271Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746\"" Apr 21 09:59:57.839044 containerd[1603]: time="2026-04-21T09:59:57.837503200Z" level=info msg="StartContainer for \"27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746\"" Apr 21 09:59:57.989925 containerd[1603]: time="2026-04-21T09:59:57.989886229Z" level=info msg="StartContainer for \"27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746\" returns successfully" Apr 21 09:59:58.022539 containerd[1603]: time="2026-04-21T09:59:58.022457445Z" level=info msg="shim disconnected" id=27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746 namespace=k8s.io Apr 21 09:59:58.023109 containerd[1603]: time="2026-04-21T09:59:58.022919090Z" level=warning msg="cleaning up after shim disconnected" id=27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746 namespace=k8s.io Apr 21 09:59:58.023238 containerd[1603]: time="2026-04-21T09:59:58.023011811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:59:58.472548 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-27ebf2be73a89a27cb50fe882c28e660cad06a02c9afa7f1dffddd47cfab8746-rootfs.mount: Deactivated successfully. Apr 21 09:59:58.814554 containerd[1603]: time="2026-04-21T09:59:58.814450501Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 09:59:58.832399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043656887.mount: Deactivated successfully. Apr 21 09:59:58.838524 containerd[1603]: time="2026-04-21T09:59:58.838366627Z" level=info msg="CreateContainer within sandbox \"70d2ed6534b037d3f3b006478a72aa1d83f88cc5a273e37d683d6e1a1e2644f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6008bff9c83e44fbf5008f854453b70b4b64cdc97f61f26a7001425d75a68464\"" Apr 21 09:59:58.839545 containerd[1603]: time="2026-04-21T09:59:58.839341037Z" level=info msg="StartContainer for \"6008bff9c83e44fbf5008f854453b70b4b64cdc97f61f26a7001425d75a68464\"" Apr 21 09:59:58.906049 containerd[1603]: time="2026-04-21T09:59:58.905874001Z" level=info msg="StartContainer for \"6008bff9c83e44fbf5008f854453b70b4b64cdc97f61f26a7001425d75a68464\" returns successfully" Apr 21 09:59:59.217068 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 21 09:59:59.835539 kubelet[2678]: I0421 09:59:59.835452 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dg9cl" podStartSLOduration=4.835429986 podStartE2EDuration="4.835429986s" podCreationTimestamp="2026-04-21 09:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:59.832643157 +0000 UTC m=+108.582952545" watchObservedRunningTime="2026-04-21 09:59:59.835429986 +0000 UTC m=+108.585739414" Apr 21 10:00:02.188010 systemd-networkd[1247]: lxc_health: Link UP Apr 21 
10:00:02.208390 systemd-networkd[1247]: lxc_health: Gained carrier Apr 21 10:00:02.404220 systemd[1]: run-containerd-runc-k8s.io-6008bff9c83e44fbf5008f854453b70b4b64cdc97f61f26a7001425d75a68464-runc.IGNiN8.mount: Deactivated successfully. Apr 21 10:00:03.715308 systemd-networkd[1247]: lxc_health: Gained IPv6LL Apr 21 10:00:09.014451 sshd[4509]: pam_unix(sshd:session): session closed for user core Apr 21 10:00:09.020733 systemd[1]: sshd@25-178.104.211.77:22-50.85.169.122:43558.service: Deactivated successfully. Apr 21 10:00:09.025797 systemd[1]: session-26.scope: Deactivated successfully. Apr 21 10:00:09.028557 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit. Apr 21 10:00:09.030523 systemd-logind[1571]: Removed session 26. Apr 21 10:00:11.406413 containerd[1603]: time="2026-04-21T10:00:11.406206212Z" level=info msg="StopPodSandbox for \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\"" Apr 21 10:00:11.406413 containerd[1603]: time="2026-04-21T10:00:11.406325413Z" level=info msg="TearDown network for sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" successfully" Apr 21 10:00:11.406413 containerd[1603]: time="2026-04-21T10:00:11.406339173Z" level=info msg="StopPodSandbox for \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" returns successfully" Apr 21 10:00:11.407788 containerd[1603]: time="2026-04-21T10:00:11.407692584Z" level=info msg="RemovePodSandbox for \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\"" Apr 21 10:00:11.407788 containerd[1603]: time="2026-04-21T10:00:11.407748065Z" level=info msg="Forcibly stopping sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\"" Apr 21 10:00:11.407976 containerd[1603]: time="2026-04-21T10:00:11.407827866Z" level=info msg="TearDown network for sandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" successfully" Apr 21 10:00:11.413807 containerd[1603]: 
time="2026-04-21T10:00:11.413568595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:00:11.413807 containerd[1603]: time="2026-04-21T10:00:11.413644316Z" level=info msg="RemovePodSandbox \"0d9e35e3a05fb01ce78f0113710b4057edb5dc3795e1b2e988c1ff9b1a532d9d\" returns successfully" Apr 21 10:00:11.414631 containerd[1603]: time="2026-04-21T10:00:11.414478363Z" level=info msg="StopPodSandbox for \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\"" Apr 21 10:00:11.414631 containerd[1603]: time="2026-04-21T10:00:11.414562124Z" level=info msg="TearDown network for sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" successfully" Apr 21 10:00:11.414631 containerd[1603]: time="2026-04-21T10:00:11.414573604Z" level=info msg="StopPodSandbox for \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" returns successfully" Apr 21 10:00:11.415004 containerd[1603]: time="2026-04-21T10:00:11.414960927Z" level=info msg="RemovePodSandbox for \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\"" Apr 21 10:00:11.415004 containerd[1603]: time="2026-04-21T10:00:11.414992048Z" level=info msg="Forcibly stopping sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\"" Apr 21 10:00:11.415142 containerd[1603]: time="2026-04-21T10:00:11.415120129Z" level=info msg="TearDown network for sandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" successfully" Apr 21 10:00:11.420636 containerd[1603]: time="2026-04-21T10:00:11.420577136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Apr 21 10:00:11.420636 containerd[1603]: time="2026-04-21T10:00:11.420644817Z" level=info msg="RemovePodSandbox \"745b75f72cd057c59e21cdebf6daa5e2ad9d1191de7264e8ab67ab77a469aecd\" returns successfully" Apr 21 10:00:24.472162 kubelet[2678]: E0421 10:00:24.471386 2678 controller.go:195] "Failed to update lease" err="Put \"https://178.104.211.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-fa740892b3?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 21 10:00:24.853195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011-rootfs.mount: Deactivated successfully. Apr 21 10:00:24.857543 containerd[1603]: time="2026-04-21T10:00:24.857272267Z" level=info msg="shim disconnected" id=36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011 namespace=k8s.io Apr 21 10:00:24.857543 containerd[1603]: time="2026-04-21T10:00:24.857354788Z" level=warning msg="cleaning up after shim disconnected" id=36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011 namespace=k8s.io Apr 21 10:00:24.857543 containerd[1603]: time="2026-04-21T10:00:24.857363428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:00:24.886391 kubelet[2678]: I0421 10:00:24.886348 2678 scope.go:117] "RemoveContainer" containerID="36a84a48c81195b11a4947b9d46e2b55f6e7c52433e91e554962b0887f63b011" Apr 21 10:00:24.889900 containerd[1603]: time="2026-04-21T10:00:24.889639993Z" level=info msg="CreateContainer within sandbox \"491d42d5e8784ea0a92ebdf6e45da82d731652f69d630164f4394e969f992521\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 10:00:24.907809 kubelet[2678]: E0421 10:00:24.906989 2678 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35992->10.0.0.2:2379: read: 
connection timed out" Apr 21 10:00:24.914452 containerd[1603]: time="2026-04-21T10:00:24.914327180Z" level=info msg="CreateContainer within sandbox \"491d42d5e8784ea0a92ebdf6e45da82d731652f69d630164f4394e969f992521\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"04e5fa41fe88a2d81adc61e722c50612980084a999a4e76c64ac23bc22128464\"" Apr 21 10:00:24.915368 containerd[1603]: time="2026-04-21T10:00:24.915002945Z" level=info msg="StartContainer for \"04e5fa41fe88a2d81adc61e722c50612980084a999a4e76c64ac23bc22128464\"" Apr 21 10:00:24.947491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b-rootfs.mount: Deactivated successfully. Apr 21 10:00:24.954398 containerd[1603]: time="2026-04-21T10:00:24.954234243Z" level=info msg="shim disconnected" id=9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b namespace=k8s.io Apr 21 10:00:24.954398 containerd[1603]: time="2026-04-21T10:00:24.954307244Z" level=warning msg="cleaning up after shim disconnected" id=9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b namespace=k8s.io Apr 21 10:00:24.954398 containerd[1603]: time="2026-04-21T10:00:24.954316644Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:00:25.006519 containerd[1603]: time="2026-04-21T10:00:25.005875555Z" level=info msg="StartContainer for \"04e5fa41fe88a2d81adc61e722c50612980084a999a4e76c64ac23bc22128464\" returns successfully" Apr 21 10:00:25.895281 kubelet[2678]: I0421 10:00:25.895066 2678 scope.go:117] "RemoveContainer" containerID="9d2b6b5e5e429027c93a4ac6f57c0c9e65800fd0b655471f7d48f97e5c6d257b" Apr 21 10:00:25.898663 containerd[1603]: time="2026-04-21T10:00:25.898521954Z" level=info msg="CreateContainer within sandbox \"b5d53d71aed581630b6d012e81478390eb1e6ecd2daf6f16cd0e2dfb23cbb4dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 21 10:00:25.926438 containerd[1603]: 
time="2026-04-21T10:00:25.926386484Z" level=info msg="CreateContainer within sandbox \"b5d53d71aed581630b6d012e81478390eb1e6ecd2daf6f16cd0e2dfb23cbb4dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e79a99c60612b262626d6cf1b1a3151a4e958e3c4fd719476a16df1e8a5a6450\"" Apr 21 10:00:25.928310 containerd[1603]: time="2026-04-21T10:00:25.927126689Z" level=info msg="StartContainer for \"e79a99c60612b262626d6cf1b1a3151a4e958e3c4fd719476a16df1e8a5a6450\"" Apr 21 10:00:25.995454 containerd[1603]: time="2026-04-21T10:00:25.995374763Z" level=info msg="StartContainer for \"e79a99c60612b262626d6cf1b1a3151a4e958e3c4fd719476a16df1e8a5a6450\" returns successfully" Apr 21 10:00:26.854581 systemd[1]: run-containerd-runc-k8s.io-e79a99c60612b262626d6cf1b1a3151a4e958e3c4fd719476a16df1e8a5a6450-runc.6bXNla.mount: Deactivated successfully.