May 10 00:03:49.890611 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 10 00:03:49.890638 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025 May 10 00:03:49.890649 kernel: KASLR enabled May 10 00:03:49.890656 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II May 10 00:03:49.890662 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 May 10 00:03:49.890668 kernel: random: crng init done May 10 00:03:49.890676 kernel: ACPI: Early table checksum verification disabled May 10 00:03:49.890683 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) May 10 00:03:49.890690 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) May 10 00:03:49.890698 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890704 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890711 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890717 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890724 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890732 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890741 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890748 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890755 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:03:49.890762 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) May 10 00:03:49.890769 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 May 10 00:03:49.890776 kernel: NUMA: Failed to initialise from firmware May 10 00:03:49.890783 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] May 10 00:03:49.890790 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] May 10 00:03:49.890797 kernel: Zone ranges: May 10 00:03:49.890805 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 10 00:03:49.890813 kernel: DMA32 empty May 10 00:03:49.890820 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] May 10 00:03:49.890827 kernel: Movable zone start for each node May 10 00:03:49.890834 kernel: Early memory node ranges May 10 00:03:49.890841 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] May 10 00:03:49.890848 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] May 10 00:03:49.890855 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] May 10 00:03:49.890862 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] May 10 00:03:49.890869 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] May 10 00:03:49.890876 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] May 10 00:03:49.890882 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] May 10 00:03:49.890890 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] May 10 00:03:49.890898 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges May 10 00:03:49.890906 kernel: psci: probing for conduit method from ACPI. 
May 10 00:03:49.890913 kernel: psci: PSCIv1.1 detected in firmware. May 10 00:03:49.890923 kernel: psci: Using standard PSCI v0.2 function IDs May 10 00:03:49.890931 kernel: psci: Trusted OS migration not required May 10 00:03:49.890938 kernel: psci: SMC Calling Convention v1.1 May 10 00:03:49.890947 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 10 00:03:49.890955 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 10 00:03:49.890962 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 10 00:03:49.890970 kernel: pcpu-alloc: [0] 0 [0] 1 May 10 00:03:49.890978 kernel: Detected PIPT I-cache on CPU0 May 10 00:03:49.890985 kernel: CPU features: detected: GIC system register CPU interface May 10 00:03:49.890992 kernel: CPU features: detected: Hardware dirty bit management May 10 00:03:49.891000 kernel: CPU features: detected: Spectre-v4 May 10 00:03:49.891007 kernel: CPU features: detected: Spectre-BHB May 10 00:03:49.891015 kernel: CPU features: kernel page table isolation forced ON by KASLR May 10 00:03:49.891024 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 10 00:03:49.891031 kernel: CPU features: detected: ARM erratum 1418040 May 10 00:03:49.891039 kernel: CPU features: detected: SSBS not fully self-synchronizing May 10 00:03:49.891047 kernel: alternatives: applying boot alternatives May 10 00:03:49.891056 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 10 00:03:49.891064 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:03:49.891071 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 00:03:49.891079 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:03:49.891086 kernel: Fallback order for Node 0: 0 May 10 00:03:49.891094 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 May 10 00:03:49.891101 kernel: Policy zone: Normal May 10 00:03:49.891110 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:03:49.891117 kernel: software IO TLB: area num 2. May 10 00:03:49.891183 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) May 10 00:03:49.891192 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved) May 10 00:03:49.891199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 10 00:03:49.891207 kernel: rcu: Preemptible hierarchical RCU implementation. May 10 00:03:49.891215 kernel: rcu: RCU event tracing is enabled. May 10 00:03:49.891223 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 10 00:03:49.891231 kernel: Trampoline variant of Tasks RCU enabled. May 10 00:03:49.891238 kernel: Tracing variant of Tasks RCU enabled. May 10 00:03:49.891246 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 10 00:03:49.891257 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 10 00:03:49.891264 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 10 00:03:49.891271 kernel: GICv3: 256 SPIs implemented May 10 00:03:49.891279 kernel: GICv3: 0 Extended SPIs implemented May 10 00:03:49.891286 kernel: Root IRQ handler: gic_handle_irq May 10 00:03:49.891293 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 10 00:03:49.891301 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 10 00:03:49.891308 kernel: ITS [mem 0x08080000-0x0809ffff] May 10 00:03:49.891316 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) May 10 00:03:49.891324 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) May 10 00:03:49.891331 kernel: GICv3: using LPI property table @0x00000001000e0000 May 10 00:03:49.891339 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 May 10 00:03:49.891360 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 10 00:03:49.891368 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:03:49.891375 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 10 00:03:49.891383 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 10 00:03:49.891390 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 10 00:03:49.891416 kernel: Console: colour dummy device 80x25 May 10 00:03:49.891426 kernel: ACPI: Core revision 20230628 May 10 00:03:49.891434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 10 00:03:49.891442 kernel: pid_max: default: 32768 minimum: 301 May 10 00:03:49.891450 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 10 00:03:49.891461 kernel: landlock: Up and running. May 10 00:03:49.891468 kernel: SELinux: Initializing. May 10 00:03:49.891476 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:03:49.891485 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:03:49.891493 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1) May 10 00:03:49.891501 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 00:03:49.891509 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 10 00:03:49.891517 kernel: rcu: Hierarchical SRCU implementation. May 10 00:03:49.891524 kernel: rcu: Max phase no-delay instances is 400. May 10 00:03:49.891533 kernel: Platform MSI: ITS@0x8080000 domain created May 10 00:03:49.891541 kernel: PCI/MSI: ITS@0x8080000 domain created May 10 00:03:49.891549 kernel: Remapping and enabling EFI services. May 10 00:03:49.891556 kernel: smp: Bringing up secondary CPUs ... 
May 10 00:03:49.891564 kernel: Detected PIPT I-cache on CPU1 May 10 00:03:49.891572 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 10 00:03:49.891579 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 May 10 00:03:49.891587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:03:49.891595 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 10 00:03:49.891604 kernel: smp: Brought up 1 node, 2 CPUs May 10 00:03:49.891612 kernel: SMP: Total of 2 processors activated. May 10 00:03:49.891620 kernel: CPU features: detected: 32-bit EL0 Support May 10 00:03:49.891634 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 10 00:03:49.891643 kernel: CPU features: detected: Common not Private translations May 10 00:03:49.891651 kernel: CPU features: detected: CRC32 instructions May 10 00:03:49.891659 kernel: CPU features: detected: Enhanced Virtualization Traps May 10 00:03:49.891668 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 10 00:03:49.891676 kernel: CPU features: detected: LSE atomic instructions May 10 00:03:49.891684 kernel: CPU features: detected: Privileged Access Never May 10 00:03:49.891692 kernel: CPU features: detected: RAS Extension Support May 10 00:03:49.891702 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 10 00:03:49.891710 kernel: CPU: All CPU(s) started at EL1 May 10 00:03:49.891718 kernel: alternatives: applying system-wide alternatives May 10 00:03:49.891726 kernel: devtmpfs: initialized May 10 00:03:49.891734 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:03:49.891743 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 10 00:03:49.891752 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:03:49.891760 kernel: SMBIOS 3.0.0 present. May 10 00:03:49.891769 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 May 10 00:03:49.891777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:03:49.891785 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 10 00:03:49.891793 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 10 00:03:49.891801 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 10 00:03:49.891809 kernel: audit: initializing netlink subsys (disabled) May 10 00:03:49.891817 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 May 10 00:03:49.891827 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:03:49.891835 kernel: cpuidle: using governor menu May 10 00:03:49.891843 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 10 00:03:49.891851 kernel: ASID allocator initialised with 32768 entries May 10 00:03:49.891859 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 00:03:49.891867 kernel: Serial: AMBA PL011 UART driver May 10 00:03:49.891875 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 10 00:03:49.891884 kernel: Modules: 0 pages in range for non-PLT usage May 10 00:03:49.891892 kernel: Modules: 509008 pages in range for PLT usage May 10 00:03:49.891901 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:03:49.891909 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 10 00:03:49.891917 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 10 00:03:49.891926 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 10 00:03:49.891934 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:03:49.891942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 10 00:03:49.891950 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 10 00:03:49.891958 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 10 00:03:49.891967 kernel: ACPI: Added _OSI(Module Device) May 10 00:03:49.891984 kernel: ACPI: Added _OSI(Processor Device) May 10 00:03:49.891992 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:03:49.892001 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:03:49.892009 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 00:03:49.892017 kernel: ACPI: Interpreter enabled May 10 00:03:49.892024 kernel: ACPI: Using GIC for interrupt routing May 10 00:03:49.892032 kernel: ACPI: MCFG table detected, 1 entries May 10 00:03:49.892040 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 10 00:03:49.892047 kernel: printk: console [ttyAMA0] enabled May 10 00:03:49.892076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:03:49.892242 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 00:03:49.892322 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 10 00:03:49.892388 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 10 00:03:49.894658 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 10 00:03:49.894753 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 10 00:03:49.894764 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 10 00:03:49.894778 kernel: PCI host bridge to bus 0000:00 May 10 00:03:49.894855 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 10 00:03:49.894917 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 10 00:03:49.894977 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 10 00:03:49.895035 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:03:49.895128 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 10 00:03:49.895218 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 May 10 00:03:49.895291 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] May 10 00:03:49.895358 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] May 10 00:03:49.896526 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.896620 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] May 10 00:03:49.896697 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.896765 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] May 10 00:03:49.896848 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.896920 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] May 10 00:03:49.896992 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.897555 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] May 10 00:03:49.897646 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.897713 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] May 10 00:03:49.897794 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.897861 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] May 10 00:03:49.897934 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.898000 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] May 10 00:03:49.898077 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.898178 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] May 10 00:03:49.899112 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 10 00:03:49.899260 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] May 10 00:03:49.899340 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 May 10 00:03:49.900509 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] May 10 00:03:49.900625 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 10 00:03:49.900695 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] May 10 00:03:49.900769 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 10 00:03:49.900841 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 10 00:03:49.900924 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 10 00:03:49.900994 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] May 10 00:03:49.901069 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 10 00:03:49.901159 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] May 10 00:03:49.901230 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] May 10 00:03:49.901310 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 10 00:03:49.901378 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] May 10 00:03:49.901478 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 10 00:03:49.901550 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] May 10 00:03:49.901617 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] May 10 00:03:49.901694 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 10 00:03:49.901762 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] May 10 00:03:49.901835 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] May 10 00:03:49.901917 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 10 00:03:49.901984 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] May 10 00:03:49.902051 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] May 10 00:03:49.902156 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 10 00:03:49.902242 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 10 00:03:49.902316 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 May 10 00:03:49.902381 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 May 10 00:03:49.904806 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 10 00:03:49.904887 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 10 00:03:49.904954 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 May 10 00:03:49.905024 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 10 00:03:49.905092 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 May 10 00:03:49.905219 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 10 00:03:49.905294 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 10 00:03:49.905362 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 May 10 00:03:49.905444 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 10 00:03:49.905517 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 10 00:03:49.905584 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 May 10 00:03:49.905649 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 May 10 00:03:49.905744 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 10 00:03:49.905823 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 May 10 00:03:49.905890 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 May 10 00:03:49.905959 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 10 00:03:49.906025 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 May 10 00:03:49.906090 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 May 10 00:03:49.906172 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 10 00:03:49.906240 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 May 10 00:03:49.906308 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 May 10 00:03:49.906377 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 10 00:03:49.907278 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 May 10 00:03:49.907359 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 May 10 00:03:49.907504 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] May 10 00:03:49.907594 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] May 10 00:03:49.907667 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] May 10 00:03:49.907740 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] May 10 00:03:49.907806 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] May 10 00:03:49.907870 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] May 10 00:03:49.907937 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] May 10 00:03:49.908003 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] May 10 00:03:49.908073 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] May 10 00:03:49.908151 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] May 10 00:03:49.908223 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] May 10 00:03:49.908288 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] May 10 00:03:49.908354 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] May 10 00:03:49.908433 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] May 10 00:03:49.908502 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] May 10 00:03:49.908568 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] May 10 00:03:49.908634 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] May 10 00:03:49.908705 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] May 10 00:03:49.908775 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] May 10 00:03:49.908841 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] May 10 00:03:49.908906 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] May 10 00:03:49.908970 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 10 00:03:49.909035 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] May 10 00:03:49.909099 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 10 00:03:49.909179 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] May 10 00:03:49.909245 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 10 00:03:49.909311 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] May 10 00:03:49.909387 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 10 00:03:49.909479 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] May 10 00:03:49.909547 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 10 00:03:49.909614 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] May 10 00:03:49.909679 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 10 00:03:49.909749 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] May 10 00:03:49.909814 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 10 00:03:49.909879 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] May 10 00:03:49.909945 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 10 00:03:49.910009 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] May 10 00:03:49.910074 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] May 10 00:03:49.910184 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] May 10 00:03:49.910262 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] May 10 00:03:49.910337 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 10 00:03:49.910454 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] May 10 00:03:49.910524 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 10 00:03:49.910588 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 10 00:03:49.910653 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] May 10 00:03:49.910737 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] May 10 00:03:49.910812 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] May 10 00:03:49.910885 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 10 00:03:49.910950 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 10 00:03:49.911014 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] May 10 00:03:49.911082 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] May 10 00:03:49.911169 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] May 10 00:03:49.911245 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] May 10 00:03:49.911313 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 10 00:03:49.911378 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 10 00:03:49.911476 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] May 10 00:03:49.911544 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] May 10 00:03:49.911617 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] May 10 00:03:49.911685 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 10 00:03:49.911754 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 10 00:03:49.911824 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] May 10 00:03:49.911889 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] May 10 00:03:49.911960 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] May 10 00:03:49.912028 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] May 10 00:03:49.912094 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 10 00:03:49.912171 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 10 00:03:49.912237 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] May 10 00:03:49.912301 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] May 10 00:03:49.912378 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] May 10 00:03:49.912507 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] May 10 00:03:49.912574 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 10 00:03:49.912638 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 10 00:03:49.912701 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] May 10 00:03:49.912763 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] May 10 00:03:49.912835 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] May 10 00:03:49.912900 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] May 10 00:03:49.912971 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] May 10 00:03:49.913037 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 10 00:03:49.913101 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 10 00:03:49.913177 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] May 10 00:03:49.913243 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] May 10 00:03:49.913308 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 10 00:03:49.913373 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 10 00:03:49.913462 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] May 10 00:03:49.913532 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] May 10 00:03:49.913598 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 10 00:03:49.913663 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] May 10 00:03:49.913728 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] May 10 00:03:49.913792 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] May 10 00:03:49.913861 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 10 00:03:49.913920 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 10 00:03:49.913979 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 10 00:03:49.914059 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] May 10 00:03:49.914163 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] May 10 00:03:49.914555 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] May 10 00:03:49.914672 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] May 10 00:03:49.914745 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] May 10 00:03:49.914817 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] May 10 00:03:49.914896 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] May 10 00:03:49.914958 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] May 10 00:03:49.915594 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] May 10 00:03:49.915681 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 10 00:03:49.915749 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] May 10 00:03:49.915808 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] May 10 00:03:49.915875 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] May 10 00:03:49.915938 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] May 10 00:03:49.915997 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] May 10 00:03:49.916063 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] May 10 00:03:49.916166 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] May 10 00:03:49.916244 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] May 10 00:03:49.916315 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] May 10 00:03:49.916388 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] May 10 00:03:49.916466 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] May 10 00:03:49.916544 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] May 10 00:03:49.916605 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] May 10 00:03:49.916665 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] May 10 00:03:49.916744 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] May 10 00:03:49.916807 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] May 10 00:03:49.916870 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] May 10 00:03:49.916880 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 10 00:03:49.916888 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 10 00:03:49.916897 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 10 00:03:49.916904 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 10 00:03:49.916914 kernel: iommu: Default domain type: Translated May 10 00:03:49.916922 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 10 00:03:49.916930 kernel: efivars: Registered efivars operations May 10 00:03:49.916938 kernel: vgaarb: loaded May 10 00:03:49.916945 kernel: clocksource: Switched to clocksource arch_sys_counter May 10 00:03:49.916953 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:03:49.916961 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:03:49.916969 kernel: pnp: PnP ACPI init May 10 00:03:49.917042 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 10 00:03:49.917056 kernel: pnp: PnP ACPI: found 1 devices May 10 00:03:49.917064 kernel: NET: Registered PF_INET protocol family May 10 00:03:49.917071 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 00:03:49.917079 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 00:03:49.917087 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:03:49.917095 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:03:49.917102 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 10 00:03:49.917110 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 00:03:49.917126 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:03:49.917138 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:03:49.917146 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:03:49.917226 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) May 10 00:03:49.917237 kernel: PCI: CLS 0 bytes, default 64 May 10 00:03:49.917245 kernel: kvm [1]: HYP mode not available May 10 00:03:49.917253 kernel: Initialise system trusted keyrings May 10 00:03:49.917261 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 00:03:49.917269 kernel: Key type asymmetric registered May 10 00:03:49.917276 kernel: Asymmetric key parser 'x509' registered May 10 00:03:49.917287 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 10 00:03:49.917295 kernel: io scheduler mq-deadline registered May 10 00:03:49.917303 kernel: io scheduler kyber registered May 10 00:03:49.917310 kernel: io scheduler bfq registered May 10 00:03:49.917319 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 10 00:03:49.917387 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 May 10 00:03:49.920560 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 May 10 00:03:49.920647 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.920732 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 May 10 00:03:49.920801 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 May 10 00:03:49.920869 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.920938 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 10 00:03:49.921006 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 10 00:03:49.921073 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921289 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 10 00:03:49.921373 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 10 00:03:49.921461 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921535 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 10 00:03:49.921606 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 10 00:03:49.921681 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921753 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 10 00:03:49.921821 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 10 00:03:49.921887 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.921956 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 10 00:03:49.922024 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 10 00:03:49.922092 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.922185 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 10 00:03:49.922257 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 10 00:03:49.922325 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.922336 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 10 00:03:49.926013 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 10 00:03:49.926194 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 10 00:03:49.926280 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:03:49.926292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 10 00:03:49.926300 kernel: ACPI: button: Power Button [PWRB] May 10 00:03:49.926309 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 10 00:03:49.926383 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 10 00:03:49.926509 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 10 00:03:49.926523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:03:49.926531 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 10 00:03:49.926605 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 10 00:03:49.926616 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 10 00:03:49.926624 kernel: thunder_xcv, ver 1.0 May 10 00:03:49.926632 kernel: thunder_bgx, ver 1.0 May 10 00:03:49.926639 kernel: nicpf, ver 1.0 May 10 00:03:49.926647 kernel: nicvf, ver 
1.0 May 10 00:03:49.926725 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 10 00:03:49.926787 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-10T00:03:49 UTC (1746835429) May 10 00:03:49.926800 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 00:03:49.926809 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 10 00:03:49.926817 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 10 00:03:49.926824 kernel: watchdog: Hard watchdog permanently disabled May 10 00:03:49.926832 kernel: NET: Registered PF_INET6 protocol family May 10 00:03:49.926840 kernel: Segment Routing with IPv6 May 10 00:03:49.926848 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:03:49.926855 kernel: NET: Registered PF_PACKET protocol family May 10 00:03:49.926863 kernel: Key type dns_resolver registered May 10 00:03:49.926872 kernel: registered taskstats version 1 May 10 00:03:49.926880 kernel: Loading compiled-in X.509 certificates May 10 00:03:49.926888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff' May 10 00:03:49.926895 kernel: Key type .fscrypt registered May 10 00:03:49.926903 kernel: Key type fscrypt-provisioning registered May 10 00:03:49.926911 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 00:03:49.926919 kernel: ima: Allocated hash algorithm: sha1 May 10 00:03:49.926927 kernel: ima: No architecture policies found May 10 00:03:49.926934 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 10 00:03:49.926944 kernel: clk: Disabling unused clocks May 10 00:03:49.926951 kernel: Freeing unused kernel memory: 39424K May 10 00:03:49.926959 kernel: Run /init as init process May 10 00:03:49.926966 kernel: with arguments: May 10 00:03:49.926974 kernel: /init May 10 00:03:49.926982 kernel: with environment: May 10 00:03:49.926989 kernel: HOME=/ May 10 00:03:49.926997 kernel: TERM=linux May 10 00:03:49.927004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:03:49.927015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:03:49.927025 systemd[1]: Detected virtualization kvm. May 10 00:03:49.927034 systemd[1]: Detected architecture arm64. May 10 00:03:49.927042 systemd[1]: Running in initrd. May 10 00:03:49.927050 systemd[1]: No hostname configured, using default hostname. May 10 00:03:49.927058 systemd[1]: Hostname set to . May 10 00:03:49.927066 systemd[1]: Initializing machine ID from VM UUID. May 10 00:03:49.927077 systemd[1]: Queued start job for default target initrd.target. May 10 00:03:49.927085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:03:49.927093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:03:49.927103 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 10 00:03:49.927113 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:03:49.927134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
May 10 00:03:49.927143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 10 00:03:49.927155 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 10 00:03:49.927163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 10 00:03:49.927171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:03:49.927180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:03:49.927188 systemd[1]: Reached target paths.target - Path Units. May 10 00:03:49.927196 systemd[1]: Reached target slices.target - Slice Units. May 10 00:03:49.927205 systemd[1]: Reached target swap.target - Swaps. May 10 00:03:49.927213 systemd[1]: Reached target timers.target - Timer Units. May 10 00:03:49.927222 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:03:49.927230 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:03:49.927239 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 10 00:03:49.927247 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 10 00:03:49.927255 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 00:03:49.927263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 00:03:49.927272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:03:49.927280 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:03:49.927288 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 10 00:03:49.927298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:03:49.927306 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 10 00:03:49.927315 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:03:49.927323 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:03:49.927331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:03:49.927361 systemd-journald[236]: Collecting audit messages is disabled. May 10 00:03:49.927384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:49.927392 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 10 00:03:49.927424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:03:49.927433 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:03:49.927444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 00:03:49.927455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:49.927465 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 00:03:49.927475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:03:49.927485 kernel: Bridge firewalling registered May 10 00:03:49.927493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:03:49.927504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 10 00:03:49.927513 systemd-journald[236]: Journal started May 10 00:03:49.927533 systemd-journald[236]: Runtime Journal (/run/log/journal/c9dff99e546c4021a9893fc7a0a3863b) is 8.0M, max 76.6M, 68.6M free. May 10 00:03:49.895442 systemd-modules-load[237]: Inserted module 'overlay' May 10 00:03:49.929723 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:03:49.920623 systemd-modules-load[237]: Inserted module 'br_netfilter' May 10 00:03:49.934589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:03:49.935454 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:03:49.942675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:03:49.954222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:03:49.956810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:03:49.965700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:03:49.968185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:03:49.973711 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:03:49.975862 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 10 00:03:50.000789 dracut-cmdline[276]: dracut-dracut-053 May 10 00:03:50.002514 systemd-resolved[270]: Positive Trust Anchors: May 10 00:03:50.002531 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:03:50.002563 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:03:50.008415 systemd-resolved[270]: Defaulting to hostname 'linux'. May 10 00:03:50.009608 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:03:50.011239 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4 May 10 00:03:50.010874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:50.095466 kernel: SCSI subsystem initialized May 10 00:03:50.100451 kernel: Loading iSCSI transport class v2.0-870. May 10 00:03:50.108439 kernel: iscsi: registered transport (tcp) May 10 00:03:50.121444 kernel: iscsi: registered transport (qla4xxx) May 10 00:03:50.121519 kernel: QLogic iSCSI HBA Driver May 10 00:03:50.171459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 10 00:03:50.176633 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 10 00:03:50.196721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:03:50.196816 kernel: device-mapper: uevent: version 1.0.3 May 10 00:03:50.197455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 10 00:03:50.249479 kernel: raid6: neonx8 gen() 15667 MB/s May 10 00:03:50.266475 kernel: raid6: neonx4 gen() 15572 MB/s May 10 00:03:50.283448 kernel: raid6: neonx2 gen() 13171 MB/s May 10 00:03:50.300477 kernel: raid6: neonx1 gen() 10434 MB/s May 10 00:03:50.317446 kernel: raid6: int64x8 gen() 6925 MB/s May 10 00:03:50.334471 kernel: raid6: int64x4 gen() 7324 MB/s May 10 00:03:50.351480 kernel: raid6: int64x2 gen() 6106 MB/s May 10 00:03:50.368458 kernel: raid6: int64x1 gen() 5027 MB/s May 10 00:03:50.368512 kernel: raid6: using algorithm neonx8 gen() 15667 MB/s May 10 00:03:50.385475 kernel: raid6: .... xor() 11872 MB/s, rmw enabled May 10 00:03:50.385559 kernel: raid6: using neon recovery algorithm May 10 00:03:50.390655 kernel: xor: measuring software checksum speed May 10 00:03:50.390735 kernel: 8regs : 19769 MB/sec May 10 00:03:50.390750 kernel: 32regs : 19688 MB/sec May 10 00:03:50.391446 kernel: arm64_neon : 26892 MB/sec May 10 00:03:50.391488 kernel: xor: using function: arm64_neon (26892 MB/sec) May 10 00:03:50.441476 kernel: Btrfs loaded, zoned=no, fsverity=no May 10 00:03:50.458650 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 10 00:03:50.470744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:03:50.485820 systemd-udevd[457]: Using default interface naming scheme 'v255'. May 10 00:03:50.489442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:50.497730 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 10 00:03:50.515148 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 10 00:03:50.551033 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:03:50.561632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:03:50.611098 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:03:50.619871 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 10 00:03:50.642447 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 10 00:03:50.643929 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:03:50.645661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:03:50.646878 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:03:50.655666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 10 00:03:50.670437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 10 00:03:50.707095 kernel: scsi host0: Virtio SCSI HBA May 10 00:03:50.719449 kernel: ACPI: bus type USB registered May 10 00:03:50.719509 kernel: usbcore: registered new interface driver usbfs May 10 00:03:50.719521 kernel: usbcore: registered new interface driver hub May 10 00:03:50.719531 kernel: usbcore: registered new device driver usb May 10 00:03:50.722431 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 10 00:03:50.722522 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 10 00:03:50.735820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:03:50.735958 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:03:50.740641 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:03:50.742542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:03:50.742765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:50.746832 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:50.757724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:50.768430 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 10 00:03:50.770416 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 10 00:03:50.770595 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 10 00:03:50.774544 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 10 00:03:50.774729 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 10 00:03:50.774819 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 10 00:03:50.775641 kernel: hub 1-0:1.0: USB hub found May 10 00:03:50.778616 kernel: hub 1-0:1.0: 4 ports detected May 10 00:03:50.778814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:50.783541 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 10 00:03:50.783703 kernel: hub 2-0:1.0: USB hub found May 10 00:03:50.783800 kernel: hub 2-0:1.0: 4 ports detected May 10 00:03:50.787668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:03:50.789760 kernel: sr 0:0:0:0: Power-on or device reset occurred May 10 00:03:50.793973 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 10 00:03:50.794170 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 10 00:03:50.794183 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 10 00:03:50.802501 kernel: sd 0:0:0:1: Power-on or device reset occurred May 10 00:03:50.803419 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 10 00:03:50.804572 kernel: sd 0:0:0:1: [sda] Write Protect is off May 10 00:03:50.804746 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 10 00:03:50.804834 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 10 00:03:50.808679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:03:50.808728 kernel: GPT:17805311 != 80003071 May 10 00:03:50.808738 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:03:50.809530 kernel: GPT:17805311 != 80003071 May 10 00:03:50.810416 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 10 00:03:50.810460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:03:50.812599 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 10 00:03:50.829443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:03:50.861434 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (513) May 10 00:03:50.861489 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (512) May 10 00:03:50.864751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 10 00:03:50.872606 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 10 00:03:50.880146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:03:50.886366 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 10 00:03:50.887088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 10 00:03:50.897607 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 00:03:50.913757 disk-uuid[576]: Primary Header is updated. May 10 00:03:50.913757 disk-uuid[576]: Secondary Entries is updated. May 10 00:03:50.913757 disk-uuid[576]: Secondary Header is updated. May 10 00:03:50.920429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:03:50.925478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:03:50.931442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:03:51.025449 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 00:03:51.161667 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 10 00:03:51.161757 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 10 00:03:51.162087 kernel: usbcore: registered new interface driver usbhid May 10 00:03:51.162140 kernel: usbhid: USB HID core driver May 10 00:03:51.268499 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 10 00:03:51.398469 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 10 00:03:51.452454 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 10 00:03:51.933012 disk-uuid[577]: The operation has completed successfully. May 10 00:03:51.933811 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:03:51.981517 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:03:51.981614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 00:03:51.997650 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 00:03:52.006743 sh[594]: Success May 10 00:03:52.022483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 10 00:03:52.073733 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 00:03:52.075661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 10 00:03:52.077898 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
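verity-setup above maps /dev/mapper/usr using dm-verity with sha256 (accelerated via sha256-ce). Conceptually, dm-verity protects the read-only /usr image with a Merkle tree of block hashes whose root hash is supplied at boot. The toy sketch below only illustrates that tree construction; real dm-verity adds a salt, a superblock and a fixed on-disk layout, so this root will not match veritysetup's output.

import hashlib

BLOCK = 4096

def toy_verity_root(data: bytes) -> str:
    # level 0: hash every data block
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    # higher levels: pack child hashes into blocks and hash again
    while len(level) > 1:
        packed = b"".join(level)
        level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                 for i in range(0, len(packed), BLOCK)]
    return level[0].hex() if level else hashlib.sha256(b"").hexdigest()

print(toy_verity_root(b"\x00" * (4 * BLOCK)))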
May 10 00:03:52.100513 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354 May 10 00:03:52.100602 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:52.101476 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 00:03:52.101523 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 00:03:52.102424 kernel: BTRFS info (device dm-0): using free space tree May 10 00:03:52.109537 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 10 00:03:52.111993 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 00:03:52.115445 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 00:03:52.121665 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 10 00:03:52.124838 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 00:03:52.140032 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.140095 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:52.140793 kernel: BTRFS info (device sda6): using free space tree May 10 00:03:52.146438 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:03:52.146566 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:03:52.156026 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:03:52.157002 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.163293 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 00:03:52.171631 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 10 00:03:52.255877 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:03:52.262638 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:03:52.263743 ignition[692]: Ignition 2.19.0 May 10 00:03:52.263751 ignition[692]: Stage: fetch-offline May 10 00:03:52.263795 ignition[692]: no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.263804 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.263973 ignition[692]: parsed url from cmdline: "" May 10 00:03:52.263976 ignition[692]: no config URL provided May 10 00:03:52.263981 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:03:52.263988 ignition[692]: no config at "/usr/lib/ignition/user.ign" May 10 00:03:52.267541 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:03:52.263993 ignition[692]: failed to fetch config: resource requires networking May 10 00:03:52.264217 ignition[692]: Ignition finished successfully May 10 00:03:52.289170 systemd-networkd[782]: lo: Link UP May 10 00:03:52.289184 systemd-networkd[782]: lo: Gained carrier May 10 00:03:52.291758 systemd-networkd[782]: Enumeration completed May 10 00:03:52.291886 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:03:52.293197 systemd[1]: Reached target network.target - Network. May 10 00:03:52.294333 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 10 00:03:52.294337 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:52.296042 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:52.296045 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:52.297740 systemd-networkd[782]: eth0: Link UP May 10 00:03:52.297744 systemd-networkd[782]: eth0: Gained carrier May 10 00:03:52.297751 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:52.298631 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 10 00:03:52.301718 systemd-networkd[782]: eth1: Link UP May 10 00:03:52.301721 systemd-networkd[782]: eth1: Gained carrier May 10 00:03:52.301730 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:52.321854 ignition[785]: Ignition 2.19.0 May 10 00:03:52.321872 ignition[785]: Stage: fetch May 10 00:03:52.322085 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.322095 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.322238 ignition[785]: parsed url from cmdline: "" May 10 00:03:52.322242 ignition[785]: no config URL provided May 10 00:03:52.322247 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:03:52.322256 ignition[785]: no config at "/usr/lib/ignition/user.ign" May 10 00:03:52.322277 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 10 00:03:52.323916 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 10 00:03:52.334525 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:03:52.362531 systemd-networkd[782]: eth0: DHCPv4 address 138.199.169.250/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:03:52.524218 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 10 00:03:52.531358 ignition[785]: GET result: OK May 10 00:03:52.531467 ignition[785]: parsing config with SHA512: d465e565a72d13422cee94512084c9307fdb6573192e149ec75e657549d1d82cfa0db3d65ffa2a0a25f168bcb392d7fe061cb0c51b665801aa8b666903d8a27c May 10 00:03:52.538918 unknown[785]: fetched base config from "system" May 10 00:03:52.538929 unknown[785]: fetched base config from "system" May 10 00:03:52.539433 ignition[785]: fetch: fetch complete May 10 00:03:52.538934 unknown[785]: fetched user config from "hetzner" May 10 00:03:52.539439 ignition[785]: fetch: fetch passed May 10 00:03:52.539494 ignition[785]: Ignition finished successfully May 10 00:03:52.543443 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 10 00:03:52.547653 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 00:03:52.565768 ignition[792]: Ignition 2.19.0 May 10 00:03:52.565783 ignition[792]: Stage: kargs May 10 00:03:52.565963 ignition[792]: no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.565973 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.566997 ignition[792]: kargs: kargs passed May 10 00:03:52.569219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
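The fetch stage above polls the Hetzner metadata endpoint for userdata, fails once while the network is still coming up, retries after DHCP completes, and then logs a SHA512 of the config it received. A minimal Python sketch of that flow, with the URL taken from the log; Ignition's actual retry/backoff policy and config parsing are more involved.

import hashlib
import time
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(attempts=5, delay=1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                data = resp.read()
            print(f"GET result: OK (attempt #{attempt})")
            return data
        except OSError as err:
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch userdata")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())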
May 10 00:03:52.567050 ignition[792]: Ignition finished successfully May 10 00:03:52.576718 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 10 00:03:52.588222 ignition[799]: Ignition 2.19.0 May 10 00:03:52.588231 ignition[799]: Stage: disks May 10 00:03:52.588428 ignition[799]: no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.588438 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.591603 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 10 00:03:52.589390 ignition[799]: disks: disks passed May 10 00:03:52.589463 ignition[799]: Ignition finished successfully May 10 00:03:52.593921 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 00:03:52.596996 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 00:03:52.598786 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:03:52.600039 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:03:52.601124 systemd[1]: Reached target basic.target - Basic System. May 10 00:03:52.607689 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 00:03:52.625536 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 10 00:03:52.629529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 00:03:52.634603 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 00:03:52.680427 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none. May 10 00:03:52.681875 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 00:03:52.684442 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 00:03:52.692561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:03:52.696592 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 00:03:52.699272 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 10 00:03:52.702540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:03:52.702583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:03:52.704352 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 00:03:52.711429 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) May 10 00:03:52.713375 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.713447 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:52.713461 kernel: BTRFS info (device sda6): using free space tree May 10 00:03:52.713634 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 00:03:52.722537 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:03:52.722597 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:03:52.728830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 00:03:52.765073 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:03:52.769636 coreos-metadata[817]: May 10 00:03:52.769 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 10 00:03:52.770911 coreos-metadata[817]: May 10 00:03:52.770 INFO Fetch successful May 10 00:03:52.772566 coreos-metadata[817]: May 10 00:03:52.771 INFO wrote hostname ci-4081-3-3-n-025f904aa2 to /sysroot/etc/hostname May 10 00:03:52.776457 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:03:52.779554 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory May 10 00:03:52.784268 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:03:52.790001 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:03:52.891714 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 00:03:52.897651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 00:03:52.902390 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 10 00:03:52.910481 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:52.936510 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 00:03:52.937050 ignition[932]: INFO : Ignition 2.19.0 May 10 00:03:52.937050 ignition[932]: INFO : Stage: mount May 10 00:03:52.937050 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:52.937050 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:52.940222 ignition[932]: INFO : mount: mount passed May 10 00:03:52.940222 ignition[932]: INFO : Ignition finished successfully May 10 00:03:52.939997 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 00:03:52.944665 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 00:03:53.101486 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 00:03:53.120833 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:03:53.133441 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) May 10 00:03:53.134839 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:03:53.134906 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:03:53.134928 kernel: BTRFS info (device sda6): using free space tree May 10 00:03:53.138443 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:03:53.138510 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:03:53.142370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
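The coreos-metadata lines above fetch the machine's hostname from the Hetzner metadata service and write it into /sysroot/etc/hostname before the root switch. A minimal Python sketch of those two steps, with the URL and path taken from the log; the real agent retries and validates more carefully.

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(sysroot="/sysroot"):
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")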
May 10 00:03:53.168227 ignition[960]: INFO : Ignition 2.19.0 May 10 00:03:53.168227 ignition[960]: INFO : Stage: files May 10 00:03:53.169397 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:53.169397 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:53.171179 ignition[960]: DEBUG : files: compiled without relabeling support, skipping May 10 00:03:53.171997 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:03:53.171997 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:03:53.175351 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:03:53.176478 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:03:53.176478 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:03:53.175752 unknown[960]: wrote ssh authorized keys file for user: core May 10 00:03:53.179729 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:03:53.179729 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 10 00:03:53.268392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:03:53.652729 systemd-networkd[782]: eth1: Gained IPv6LL May 10 00:03:53.716710 systemd-networkd[782]: eth0: Gained IPv6LL May 10 00:03:53.858511 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:03:53.860769 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:03:53.860769 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 10 00:03:54.505936 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:03:54.885720 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:03:54.886748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:03:54.893298 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 10 00:03:54.893298 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 10 00:03:55.480155 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 00:03:55.683355 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 10 00:03:55.683355 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 10 00:03:55.686311 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:03:55.687394 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:03:55.687394 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 10 00:03:55.687394 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:03:55.687394 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:03:55.687394 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:03:55.687394 ignition[960]: INFO : files: files passed May 10 00:03:55.687394 ignition[960]: INFO : Ignition finished successfully May 10 00:03:55.690932 systemd[1]: Finished ignition-files.service - Ignition (files). 
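The files stage above is driven by the Ignition config fetched earlier: it downloads archives, writes small files and symlinks, and enables units. Purely as an illustration (this is not the config actually used here, and schema details depend on the Ignition spec version), a fragment that would request one of the downloads and enable prepare-helm.service could be assembled like this, with the URL copied from the log:

import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "mode": 420,  # octal 0644
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True}
        ]
    },
}

print(json.dumps(config, indent=2))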
May 10 00:03:55.700996 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 00:03:55.704348 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 00:03:55.706713 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:03:55.708656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 10 00:03:55.718076 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:55.718076 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:55.720543 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:03:55.724475 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:03:55.725354 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 00:03:55.732951 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 00:03:55.762524 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:03:55.762657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 00:03:55.764252 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 00:03:55.765636 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 00:03:55.767268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 00:03:55.768599 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 00:03:55.787803 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:03:55.794680 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 00:03:55.808137 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:55.809519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:03:55.810215 systemd[1]: Stopped target timers.target - Timer Units. May 10 00:03:55.811197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:03:55.811316 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:03:55.812596 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 00:03:55.813947 systemd[1]: Stopped target basic.target - Basic System. May 10 00:03:55.814928 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 00:03:55.815854 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:03:55.816812 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 00:03:55.817863 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 00:03:55.818892 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:03:55.820010 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 00:03:55.821200 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 00:03:55.822103 systemd[1]: Stopped target swap.target - Swaps. May 10 00:03:55.822904 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:03:55.823019 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
May 10 00:03:55.824279 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 10 00:03:55.824940 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:03:55.826084 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 10 00:03:55.829514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:03:55.830320 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:03:55.830471 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 00:03:55.832535 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:03:55.832671 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:03:55.834896 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:03:55.835003 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 00:03:55.836302 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 10 00:03:55.836423 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:03:55.847782 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 00:03:55.852672 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 00:03:55.853447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:03:55.853665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:03:55.856321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:03:55.856550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:03:55.867865 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:03:55.870440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 00:03:55.872823 ignition[1012]: INFO : Ignition 2.19.0 May 10 00:03:55.872823 ignition[1012]: INFO : Stage: umount May 10 00:03:55.872823 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:03:55.872823 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:03:55.875845 ignition[1012]: INFO : umount: umount passed May 10 00:03:55.875845 ignition[1012]: INFO : Ignition finished successfully May 10 00:03:55.878282 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:03:55.878433 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 00:03:55.880528 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:03:55.881006 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:03:55.881048 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 00:03:55.885999 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:03:55.886159 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 00:03:55.887345 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:03:55.887396 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 10 00:03:55.888417 systemd[1]: Stopped target network.target - Network. May 10 00:03:55.889573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:03:55.889661 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:03:55.891447 systemd[1]: Stopped target paths.target - Path Units. 
May 10 00:03:55.893030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:03:55.900484 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:03:55.901347 systemd[1]: Stopped target slices.target - Slice Units. May 10 00:03:55.902925 systemd[1]: Stopped target sockets.target - Socket Units. May 10 00:03:55.904281 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:03:55.904330 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:03:55.905315 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:03:55.905360 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:03:55.906558 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:03:55.906612 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 00:03:55.907731 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 00:03:55.907779 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 00:03:55.908844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 00:03:55.910810 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 00:03:55.914464 systemd-networkd[782]: eth0: DHCPv6 lease lost May 10 00:03:55.920517 systemd-networkd[782]: eth1: DHCPv6 lease lost May 10 00:03:55.921686 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:03:55.921870 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 00:03:55.927847 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:03:55.927985 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 00:03:55.931182 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:03:55.931242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 00:03:55.937619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 00:03:55.938136 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:03:55.938196 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:03:55.940386 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:03:55.940462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 00:03:55.941019 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:03:55.941056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 10 00:03:55.941705 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 00:03:55.941743 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:03:55.942627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:03:55.946383 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:03:55.946500 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 00:03:55.956180 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:03:55.956278 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 00:03:55.963011 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:03:55.963167 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 10 00:03:55.967971 systemd[1]: systemd-udevd.service: Deactivated successfully. 
May 10 00:03:55.968135 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:55.969535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:03:55.969578 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 10 00:03:55.970650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:03:55.970684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:03:55.971572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:03:55.971620 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 00:03:55.973099 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:03:55.973144 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 00:03:55.974644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:03:55.974688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:03:55.998296 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 00:03:55.999889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:03:56.000002 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:03:56.001732 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 10 00:03:56.001819 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:03:56.005530 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:03:56.005582 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:03:56.006614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:03:56.006656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:56.011352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:03:56.011510 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 00:03:56.012601 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 00:03:56.023647 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 00:03:56.033343 systemd[1]: Switching root. May 10 00:03:56.063939 systemd-journald[236]: Journal stopped May 10 00:03:56.950813 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). May 10 00:03:56.950892 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:03:56.950917 kernel: SELinux: policy capability open_perms=1 May 10 00:03:56.950935 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:03:56.950945 kernel: SELinux: policy capability always_check_network=0 May 10 00:03:56.950954 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:03:56.950964 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:03:56.950973 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:03:56.950983 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:03:56.950993 kernel: audit: type=1403 audit(1746835436.184:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:03:56.951004 systemd[1]: Successfully loaded SELinux policy in 34.370ms. May 10 00:03:56.951029 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.001ms. 
May 10 00:03:56.951041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:03:56.951052 systemd[1]: Detected virtualization kvm. May 10 00:03:56.951064 systemd[1]: Detected architecture arm64. May 10 00:03:56.951074 systemd[1]: Detected first boot. May 10 00:03:56.951123 systemd[1]: Hostname set to . May 10 00:03:56.951136 systemd[1]: Initializing machine ID from VM UUID. May 10 00:03:56.951146 zram_generator::config[1054]: No configuration found. May 10 00:03:56.951161 systemd[1]: Populated /etc with preset unit settings. May 10 00:03:56.951172 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:03:56.951182 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 10 00:03:56.951197 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:03:56.951208 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 00:03:56.951218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 00:03:56.951228 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 00:03:56.951239 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 00:03:56.951249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 00:03:56.951261 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 00:03:56.951271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 00:03:56.951286 systemd[1]: Created slice user.slice - User and Session Slice. May 10 00:03:56.951297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:03:56.951308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:03:56.951318 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 00:03:56.951329 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 00:03:56.951339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 00:03:56.951351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:03:56.951363 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 10 00:03:56.951374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:03:56.951384 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 10 00:03:56.951395 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 10 00:03:56.951419 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 10 00:03:56.951432 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 00:03:56.951445 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:03:56.951455 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:03:56.951466 systemd[1]: Reached target slices.target - Slice Units. 
May 10 00:03:56.951476 systemd[1]: Reached target swap.target - Swaps. May 10 00:03:56.951487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 00:03:56.951497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 00:03:56.951507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 00:03:56.951518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 00:03:56.951528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:03:56.951539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 10 00:03:56.951550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 00:03:56.951560 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 00:03:56.951578 systemd[1]: Mounting media.mount - External Media Directory... May 10 00:03:56.951590 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 00:03:56.951601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 00:03:56.951613 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 00:03:56.951624 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:03:56.951634 systemd[1]: Reached target machines.target - Containers. May 10 00:03:56.951645 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 00:03:56.951655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:56.951666 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:03:56.951677 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 00:03:56.951687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:56.951699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:03:56.951709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:56.951720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 00:03:56.951731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:56.951742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:03:56.951753 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:03:56.951763 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 10 00:03:56.951773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:03:56.951785 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:03:56.951795 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:03:56.951805 kernel: ACPI: bus type drm_connector registered May 10 00:03:56.951820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:03:56.951830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 10 00:03:56.951841 kernel: fuse: init (API version 7.39) May 10 00:03:56.951850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 00:03:56.951861 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:03:56.951872 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:03:56.951882 systemd[1]: Stopped verity-setup.service. May 10 00:03:56.951893 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 00:03:56.951905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 00:03:56.951916 systemd[1]: Mounted media.mount - External Media Directory. May 10 00:03:56.951926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 10 00:03:56.951939 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 00:03:56.951950 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 00:03:56.951961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:03:56.951971 kernel: loop: module loaded May 10 00:03:56.951981 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:03:56.951992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 10 00:03:56.952002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:56.952013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:56.952024 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:03:56.952036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:03:56.952048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:03:56.952059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:56.952069 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:03:56.952079 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 00:03:56.952101 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:56.952114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:56.952125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:03:56.952136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 00:03:56.952146 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 00:03:56.952188 systemd-journald[1124]: Collecting audit messages is disabled. May 10 00:03:56.952219 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 00:03:56.952234 systemd-journald[1124]: Journal started May 10 00:03:56.952259 systemd-journald[1124]: Runtime Journal (/run/log/journal/c9dff99e546c4021a9893fc7a0a3863b) is 8.0M, max 76.6M, 68.6M free. May 10 00:03:56.677126 systemd[1]: Queued start job for default target multi-user.target. May 10 00:03:56.694972 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 10 00:03:56.695589 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:03:56.956711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 00:03:56.958502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:56.970432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 10 00:03:56.982538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 00:03:56.984436 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:03:56.986344 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 00:03:56.993234 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 00:03:56.994743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 10 00:03:56.996742 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 00:03:56.999447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:03:57.014968 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:03:57.015712 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:03:57.017685 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 10 00:03:57.031630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 10 00:03:57.036048 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 00:03:57.036159 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. May 10 00:03:57.036169 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. May 10 00:03:57.037257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:57.041635 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 10 00:03:57.045712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 00:03:57.046466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:57.050784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 10 00:03:57.054602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 10 00:03:57.057574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:03:57.058665 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 00:03:57.063602 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 00:03:57.087495 systemd-journald[1124]: Time spent on flushing to /var/log/journal/c9dff99e546c4021a9893fc7a0a3863b is 44.142ms for 1139 entries. May 10 00:03:57.087495 systemd-journald[1124]: System Journal (/var/log/journal/c9dff99e546c4021a9893fc7a0a3863b) is 8.0M, max 584.8M, 576.8M free. May 10 00:03:57.148254 systemd-journald[1124]: Received client request to flush runtime journal. May 10 00:03:57.148315 kernel: loop0: detected capacity change from 0 to 114432 May 10 00:03:57.148338 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:03:57.110805 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 00:03:57.113870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 00:03:57.131259 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
May 10 00:03:57.133474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:03:57.145160 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 10 00:03:57.155845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 00:03:57.168534 kernel: loop1: detected capacity change from 0 to 114328 May 10 00:03:57.169312 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:03:57.179391 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 00:03:57.193237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:03:57.194672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:03:57.195312 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 10 00:03:57.211539 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 10 00:03:57.211553 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 10 00:03:57.214465 kernel: loop2: detected capacity change from 0 to 8 May 10 00:03:57.220378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:03:57.241432 kernel: loop3: detected capacity change from 0 to 189592 May 10 00:03:57.291860 kernel: loop4: detected capacity change from 0 to 114432 May 10 00:03:57.307436 kernel: loop5: detected capacity change from 0 to 114328 May 10 00:03:57.321753 kernel: loop6: detected capacity change from 0 to 8 May 10 00:03:57.326452 kernel: loop7: detected capacity change from 0 to 189592 May 10 00:03:57.353142 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 10 00:03:57.353944 (sd-merge)[1196]: Merged extensions into '/usr'. May 10 00:03:57.363948 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... May 10 00:03:57.363964 systemd[1]: Reloading... May 10 00:03:57.502429 zram_generator::config[1225]: No configuration found. May 10 00:03:57.578986 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:03:57.625802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:03:57.672288 systemd[1]: Reloading finished in 307 ms. May 10 00:03:57.718337 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 00:03:57.722378 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 00:03:57.730558 systemd[1]: Starting ensure-sysext.service... May 10 00:03:57.732240 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:03:57.742062 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... May 10 00:03:57.742227 systemd[1]: Reloading... May 10 00:03:57.780937 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:03:57.781283 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
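The sd-merge messages above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images over /usr. The kubernetes.raw link was placed in /etc/extensions earlier in this log; sysext also scans other hierarchy directories (the exact list depends on the systemd version). A minimal sketch that lists candidate extension images in the commonly scanned directories, as an assumption rather than the authoritative search order:

from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_candidates():
    found = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # extensions are *.raw images or plain directory trees
            if entry.suffix == ".raw" or entry.is_dir():
                found.append((entry.name, str(entry)))
    return found

for name, path in list_extension_candidates():
    print(f"candidate extension: {name} ({path})")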
May 10 00:03:57.784170 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:03:57.784464 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. May 10 00:03:57.784515 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. May 10 00:03:57.789596 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:03:57.789611 systemd-tmpfiles[1260]: Skipping /boot May 10 00:03:57.803234 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:03:57.803249 systemd-tmpfiles[1260]: Skipping /boot May 10 00:03:57.832426 zram_generator::config[1286]: No configuration found. May 10 00:03:57.940128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:03:57.991996 systemd[1]: Reloading finished in 249 ms. May 10 00:03:58.009554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 10 00:03:58.017217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:03:58.031845 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:03:58.035720 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 00:03:58.038738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 00:03:58.049611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:03:58.056172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:03:58.061686 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 00:03:58.066007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:58.075230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:58.078734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:58.082359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:58.083118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:58.085455 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 00:03:58.086704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:58.086885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:58.093564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:58.102555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:58.103771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:58.110356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:03:58.114409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 10 00:03:58.115117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:58.117755 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 00:03:58.126458 systemd[1]: Finished ensure-sysext.service. May 10 00:03:58.135751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 00:03:58.138779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 00:03:58.142567 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 00:03:58.154593 augenrules[1358]: No rules May 10 00:03:58.165444 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:03:58.166888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:58.168478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:58.171743 systemd-udevd[1336]: Using default interface naming scheme 'v255'. May 10 00:03:58.173482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:03:58.173643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:58.174825 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:58.174964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:58.175760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:58.175827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:58.176723 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:03:58.177195 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:03:58.189281 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:03:58.196434 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:03:58.199592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:03:58.201750 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:03:58.213817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:03:58.223603 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:03:58.297484 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 00:03:58.298288 systemd[1]: Reached target time-set.target - System Time Set. May 10 00:03:58.327811 systemd-resolved[1330]: Positive Trust Anchors: May 10 00:03:58.328146 systemd-resolved[1330]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:03:58.328245 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:03:58.332967 systemd-resolved[1330]: Using system hostname 'ci-4081-3-3-n-025f904aa2'. May 10 00:03:58.334633 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:03:58.335592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:03:58.337539 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 10 00:03:58.341647 systemd-networkd[1377]: lo: Link UP May 10 00:03:58.341656 systemd-networkd[1377]: lo: Gained carrier May 10 00:03:58.342366 systemd-networkd[1377]: Enumeration completed May 10 00:03:58.342464 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:03:58.343108 systemd[1]: Reached target network.target - Network. May 10 00:03:58.357093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:03:58.424458 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:03:58.432417 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.432427 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:58.433747 systemd-networkd[1377]: eth0: Link UP May 10 00:03:58.433760 systemd-networkd[1377]: eth0: Gained carrier May 10 00:03:58.433779 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.441869 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.441882 systemd-networkd[1377]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:03:58.442439 systemd-networkd[1377]: eth1: Link UP May 10 00:03:58.442445 systemd-networkd[1377]: eth1: Gained carrier May 10 00:03:58.442459 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:03:58.479508 systemd-networkd[1377]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:03:58.481203 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. May 10 00:03:58.484536 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1389) May 10 00:03:58.496024 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 10 00:03:58.496177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 10 00:03:58.500566 systemd-networkd[1377]: eth0: DHCPv4 address 138.199.169.250/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:03:58.501274 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. May 10 00:03:58.501581 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. May 10 00:03:58.505772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:03:58.514812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:03:58.522589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:03:58.524135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:03:58.524174 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:03:58.524848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:03:58.525676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:03:58.539688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:03:58.539871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:03:58.541066 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:03:58.541613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:03:58.582372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:03:58.587442 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 10 00:03:58.587506 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 00:03:58.587541 kernel: [drm] features: -context_init May 10 00:03:58.588441 kernel: [drm] number of scanouts: 1 May 10 00:03:58.588515 kernel: [drm] number of cap sets: 0 May 10 00:03:58.590454 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 10 00:03:58.596107 kernel: Console: switching to colour frame buffer device 160x50 May 10 00:03:58.599710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 10 00:03:58.603669 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 00:03:58.604068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:03:58.604287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:03:58.609049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:58.617253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:03:58.617494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:03:58.632780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:03:58.634537 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:03:58.699104 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 10 00:03:58.740389 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 10 00:03:58.749770 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 10 00:03:58.765008 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:03:58.796561 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:03:58.798320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:03:58.799069 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:03:58.799947 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:03:58.800721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:03:58.801697 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:03:58.802420 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:03:58.803108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:03:58.803851 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:03:58.803884 systemd[1]: Reached target paths.target - Path Units. May 10 00:03:58.804382 systemd[1]: Reached target timers.target - Timer Units. May 10 00:03:58.807550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:03:58.810788 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:03:58.827886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 00:03:58.831362 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:03:58.832756 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:03:58.833590 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:03:58.834250 systemd[1]: Reached target basic.target - Basic System. May 10 00:03:58.834943 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:03:58.834978 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:03:58.847571 systemd[1]: Starting containerd.service - containerd container runtime... May 10 00:03:58.852636 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 00:03:58.854457 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:03:58.857597 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:03:58.861557 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:03:58.865696 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:03:58.866273 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:03:58.869260 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:03:58.874360 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:03:58.881658 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
May 10 00:03:58.884574 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 00:03:58.888571 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:03:58.899897 jq[1448]: false May 10 00:03:58.900574 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 00:03:58.901908 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:03:58.905556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:03:58.906996 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:03:58.911641 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:03:58.915507 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 10 00:03:58.931967 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:03:58.933450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:03:58.957423 extend-filesystems[1449]: Found loop4 May 10 00:03:58.957423 extend-filesystems[1449]: Found loop5 May 10 00:03:58.957423 extend-filesystems[1449]: Found loop6 May 10 00:03:58.957423 extend-filesystems[1449]: Found loop7 May 10 00:03:58.957423 extend-filesystems[1449]: Found sda May 10 00:03:58.969552 extend-filesystems[1449]: Found sda1 May 10 00:03:58.969552 extend-filesystems[1449]: Found sda2 May 10 00:03:58.969552 extend-filesystems[1449]: Found sda3 May 10 00:03:58.969552 extend-filesystems[1449]: Found usr May 10 00:03:58.969552 extend-filesystems[1449]: Found sda4 May 10 00:03:58.969552 extend-filesystems[1449]: Found sda6 May 10 00:03:58.969552 extend-filesystems[1449]: Found sda7 May 10 00:03:58.969552 extend-filesystems[1449]: Found sda9 May 10 00:03:58.969552 extend-filesystems[1449]: Checking size of /dev/sda9 May 10 00:03:58.994748 dbus-daemon[1447]: [system] SELinux support is enabled May 10 00:03:59.016116 coreos-metadata[1446]: May 10 00:03:59.009 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 10 00:03:59.016318 extend-filesystems[1449]: Resized partition /dev/sda9 May 10 00:03:58.976168 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:03:59.024939 tar[1465]: linux-arm64/helm May 10 00:03:59.025261 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) May 10 00:03:58.976349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 00:03:59.032742 jq[1459]: true May 10 00:03:59.038060 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 10 00:03:59.038162 coreos-metadata[1446]: May 10 00:03:59.026 INFO Fetch successful May 10 00:03:59.038162 coreos-metadata[1446]: May 10 00:03:59.028 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 10 00:03:59.038162 coreos-metadata[1446]: May 10 00:03:59.034 INFO Fetch successful May 10 00:03:58.991479 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 00:03:58.994982 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 10 00:03:59.002987 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:03:59.003017 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 00:03:59.013221 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:03:59.013244 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 10 00:03:59.043278 jq[1483]: true May 10 00:03:59.050256 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:03:59.050463 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:03:59.058514 update_engine[1458]: I20250510 00:03:59.057847 1458 main.cc:92] Flatcar Update Engine starting May 10 00:03:59.068240 systemd[1]: Started update-engine.service - Update Engine. May 10 00:03:59.076654 update_engine[1458]: I20250510 00:03:59.076396 1458 update_check_scheduler.cc:74] Next update check in 3m16s May 10 00:03:59.082711 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 00:03:59.121614 systemd-logind[1457]: New seat seat0. May 10 00:03:59.122677 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) May 10 00:03:59.122700 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 10 00:03:59.122902 systemd[1]: Started systemd-logind.service - User Login Management. May 10 00:03:59.183434 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1387) May 10 00:03:59.218432 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 10 00:03:59.255561 bash[1517]: Updated "/home/core/.ssh/authorized_keys" May 10 00:03:59.218908 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 00:03:59.224524 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 00:03:59.237723 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 10 00:03:59.247709 systemd[1]: Starting sshkeys.service... May 10 00:03:59.258740 extend-filesystems[1486]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 10 00:03:59.258740 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 5 May 10 00:03:59.258740 extend-filesystems[1486]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 10 00:03:59.265474 extend-filesystems[1449]: Resized filesystem in /dev/sda9 May 10 00:03:59.265474 extend-filesystems[1449]: Found sr0 May 10 00:03:59.261553 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:03:59.261721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 00:03:59.265371 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 00:03:59.279820 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 10 00:03:59.329672 coreos-metadata[1526]: May 10 00:03:59.329 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 10 00:03:59.332413 coreos-metadata[1526]: May 10 00:03:59.332 INFO Fetch successful May 10 00:03:59.335120 unknown[1526]: wrote ssh authorized keys file for user: core May 10 00:03:59.360410 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" May 10 00:03:59.360482 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 00:03:59.363269 systemd[1]: Finished sshkeys.service. May 10 00:03:59.368647 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:03:59.393654 containerd[1477]: time="2025-05-10T00:03:59.393523520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 10 00:03:59.471979 containerd[1477]: time="2025-05-10T00:03:59.471895560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.474678 containerd[1477]: time="2025-05-10T00:03:59.474624480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475456480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475502360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475680600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475698520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475758040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475770120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475947000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475961760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475975160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.475986120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.476053120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.476443 containerd[1477]: time="2025-05-10T00:03:59.476303640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:03:59.477709 containerd[1477]: time="2025-05-10T00:03:59.477501440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:03:59.477709 containerd[1477]: time="2025-05-10T00:03:59.477527680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:03:59.477709 containerd[1477]: time="2025-05-10T00:03:59.477634440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:03:59.477709 containerd[1477]: time="2025-05-10T00:03:59.477677280Z" level=info msg="metadata content store policy set" policy=shared May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484375400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484446240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484462800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484478880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484494760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484644200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484852560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484947200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484963240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484976120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.484988880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.485000520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.485011760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487414 containerd[1477]: time="2025-05-10T00:03:59.485024160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485038080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485050040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485061720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485122360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485146520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485160360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485180640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485195840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485207680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485221320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485232960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485247800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485260160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487726 containerd[1477]: time="2025-05-10T00:03:59.485273240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485284760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485296360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485308560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485333000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485353280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485365360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485376200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485504040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485523760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485534520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485546720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485557280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485569520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 10 00:03:59.487965 containerd[1477]: time="2025-05-10T00:03:59.485578960Z" level=info msg="NRI interface is disabled by configuration." May 10 00:03:59.488243 containerd[1477]: time="2025-05-10T00:03:59.485588720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:03:59.488264 containerd[1477]: time="2025-05-10T00:03:59.485935000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:03:59.488264 containerd[1477]: time="2025-05-10T00:03:59.485991760Z" level=info msg="Connect containerd service" May 10 00:03:59.488264 containerd[1477]: time="2025-05-10T00:03:59.486106920Z" level=info msg="using legacy CRI server" May 10 00:03:59.488264 containerd[1477]: time="2025-05-10T00:03:59.486116200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 00:03:59.488264 containerd[1477]: time="2025-05-10T00:03:59.486203040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:03:59.492430 containerd[1477]: time="2025-05-10T00:03:59.491004760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:03:59.492430 
containerd[1477]: time="2025-05-10T00:03:59.491573000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:03:59.492430 containerd[1477]: time="2025-05-10T00:03:59.491615520Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:03:59.492430 containerd[1477]: time="2025-05-10T00:03:59.491752600Z" level=info msg="Start subscribing containerd event" May 10 00:03:59.492430 containerd[1477]: time="2025-05-10T00:03:59.491796360Z" level=info msg="Start recovering state" May 10 00:03:59.493531 containerd[1477]: time="2025-05-10T00:03:59.493511040Z" level=info msg="Start event monitor" May 10 00:03:59.499815 containerd[1477]: time="2025-05-10T00:03:59.499460640Z" level=info msg="Start snapshots syncer" May 10 00:03:59.499815 containerd[1477]: time="2025-05-10T00:03:59.499490880Z" level=info msg="Start cni network conf syncer for default" May 10 00:03:59.499815 containerd[1477]: time="2025-05-10T00:03:59.499500840Z" level=info msg="Start streaming server" May 10 00:03:59.499815 containerd[1477]: time="2025-05-10T00:03:59.499657720Z" level=info msg="containerd successfully booted in 0.108601s" May 10 00:03:59.499783 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:03:59.699112 tar[1465]: linux-arm64/LICENSE May 10 00:03:59.699237 tar[1465]: linux-arm64/README.md May 10 00:03:59.711219 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:03:59.924801 systemd-networkd[1377]: eth0: Gained IPv6LL May 10 00:03:59.925675 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. May 10 00:03:59.930358 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:03:59.933876 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:03:59.942847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:03:59.946699 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:03:59.988513 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 00:04:00.323945 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:04:00.356596 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:04:00.363515 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:04:00.380357 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:04:00.380675 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:04:00.388812 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:04:00.401370 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 00:04:00.413820 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:04:00.416535 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 10 00:04:00.417768 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:04:00.436755 systemd-networkd[1377]: eth1: Gained IPv6LL May 10 00:04:00.437620 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. May 10 00:04:00.680723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:00.680894 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:00.683643 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 10 00:04:00.692264 systemd[1]: Startup finished in 769ms (kernel) + 6.495s (initrd) + 4.541s (userspace) = 11.806s. May 10 00:04:01.202197 kubelet[1576]: E0510 00:04:01.202120 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:01.204911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:01.205084 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:11.455688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:04:11.464692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:11.564739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:11.569323 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:11.619971 kubelet[1596]: E0510 00:04:11.619900 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:11.623565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:11.623849 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:21.874837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:04:21.881729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:21.997065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:22.001493 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:22.038753 kubelet[1611]: E0510 00:04:22.038693 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:22.042339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:22.042658 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:30.744307 systemd-timesyncd[1355]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). May 10 00:04:30.744442 systemd-timesyncd[1355]: Initial clock synchronization to Sat 2025-05-10 00:04:31.104596 UTC. May 10 00:04:32.293799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:04:32.301798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:32.413091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:04:32.419061 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:32.464146 kubelet[1626]: E0510 00:04:32.464046 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:32.466743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:32.466940 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:42.606323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 00:04:42.613834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:42.739941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:42.745433 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:42.784032 kubelet[1641]: E0510 00:04:42.783961 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:42.786777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:42.786957 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:04:43.931506 update_engine[1458]: I20250510 00:04:43.930897 1458 update_attempter.cc:509] Updating boot flags... May 10 00:04:43.994003 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1657) May 10 00:04:52.855956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 10 00:04:52.873827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:04:52.991655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:04:52.994250 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:04:53.040035 kubelet[1671]: E0510 00:04:53.039980 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:04:53.043294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:04:53.043509 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:03.105880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 10 00:05:03.116803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:03.228545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:05:03.240265 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:03.289176 kubelet[1687]: E0510 00:05:03.288997 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:03.292794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:03.293086 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:13.356393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 10 00:05:13.363836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:13.473629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:13.476376 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:13.517973 kubelet[1702]: E0510 00:05:13.517868 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:13.520949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:13.521210 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:23.605704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 10 00:05:23.613808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:23.722746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:23.736969 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:23.784574 kubelet[1716]: E0510 00:05:23.784491 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:23.789548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:23.790126 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:33.856394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 10 00:05:33.867045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:33.976266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:05:33.985906 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:34.031206 kubelet[1732]: E0510 00:05:34.031102 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:34.033719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:34.033887 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:44.105773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 10 00:05:44.112713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:44.217465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:44.230108 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:44.257840 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:05:44.265634 systemd[1]: Started sshd@0-138.199.169.250:22-147.75.109.163:60082.service - OpenSSH per-connection server daemon (147.75.109.163:60082). May 10 00:05:44.279457 kubelet[1747]: E0510 00:05:44.276820 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:44.279583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:44.279716 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:45.284937 sshd[1754]: Accepted publickey for core from 147.75.109.163 port 60082 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:45.288166 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:45.302540 systemd-logind[1457]: New session 1 of user core. May 10 00:05:45.304434 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:05:45.316758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 00:05:45.332882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:05:45.344219 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:05:45.349022 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:05:45.459207 systemd[1760]: Queued start job for default target default.target. May 10 00:05:45.466707 systemd[1760]: Created slice app.slice - User Application Slice. May 10 00:05:45.466776 systemd[1760]: Reached target paths.target - Paths. May 10 00:05:45.466809 systemd[1760]: Reached target timers.target - Timers. May 10 00:05:45.469480 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 00:05:45.484091 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:05:45.484388 systemd[1760]: Reached target sockets.target - Sockets. 
May 10 00:05:45.484441 systemd[1760]: Reached target basic.target - Basic System. May 10 00:05:45.484528 systemd[1760]: Reached target default.target - Main User Target. May 10 00:05:45.484581 systemd[1760]: Startup finished in 128ms. May 10 00:05:45.484662 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:05:45.491715 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:05:46.206322 systemd[1]: Started sshd@1-138.199.169.250:22-147.75.109.163:60090.service - OpenSSH per-connection server daemon (147.75.109.163:60090). May 10 00:05:47.223028 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 60090 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:47.225488 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:47.232179 systemd-logind[1457]: New session 2 of user core. May 10 00:05:47.239792 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 00:05:47.927274 sshd[1771]: pam_unix(sshd:session): session closed for user core May 10 00:05:47.932026 systemd[1]: sshd@1-138.199.169.250:22-147.75.109.163:60090.service: Deactivated successfully. May 10 00:05:47.934854 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:05:47.936863 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. May 10 00:05:47.938064 systemd-logind[1457]: Removed session 2. May 10 00:05:48.104846 systemd[1]: Started sshd@2-138.199.169.250:22-147.75.109.163:34752.service - OpenSSH per-connection server daemon (147.75.109.163:34752). May 10 00:05:49.096261 sshd[1778]: Accepted publickey for core from 147.75.109.163 port 34752 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:49.098575 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:49.104109 systemd-logind[1457]: New session 3 of user core. May 10 00:05:49.111714 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 00:05:49.784259 sshd[1778]: pam_unix(sshd:session): session closed for user core May 10 00:05:49.789902 systemd[1]: sshd@2-138.199.169.250:22-147.75.109.163:34752.service: Deactivated successfully. May 10 00:05:49.793341 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:05:49.795276 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. May 10 00:05:49.796397 systemd-logind[1457]: Removed session 3. May 10 00:05:49.968883 systemd[1]: Started sshd@3-138.199.169.250:22-147.75.109.163:34756.service - OpenSSH per-connection server daemon (147.75.109.163:34756). May 10 00:05:50.977377 sshd[1785]: Accepted publickey for core from 147.75.109.163 port 34756 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:50.979602 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:50.985608 systemd-logind[1457]: New session 4 of user core. May 10 00:05:50.990709 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 00:05:51.680577 sshd[1785]: pam_unix(sshd:session): session closed for user core May 10 00:05:51.685876 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. May 10 00:05:51.686429 systemd[1]: sshd@3-138.199.169.250:22-147.75.109.163:34756.service: Deactivated successfully. May 10 00:05:51.689002 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:05:51.691442 systemd-logind[1457]: Removed session 4. 
May 10 00:05:51.851849 systemd[1]: Started sshd@4-138.199.169.250:22-147.75.109.163:34762.service - OpenSSH per-connection server daemon (147.75.109.163:34762). May 10 00:05:52.851681 sshd[1792]: Accepted publickey for core from 147.75.109.163 port 34762 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:52.854038 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:52.860197 systemd-logind[1457]: New session 5 of user core. May 10 00:05:52.868853 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 00:05:53.393483 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 00:05:53.393763 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:53.412297 sudo[1795]: pam_unix(sudo:session): session closed for user root May 10 00:05:53.575918 sshd[1792]: pam_unix(sshd:session): session closed for user core May 10 00:05:53.581931 systemd[1]: sshd@4-138.199.169.250:22-147.75.109.163:34762.service: Deactivated successfully. May 10 00:05:53.584901 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:05:53.587322 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. May 10 00:05:53.588891 systemd-logind[1457]: Removed session 5. May 10 00:05:53.754868 systemd[1]: Started sshd@5-138.199.169.250:22-147.75.109.163:34774.service - OpenSSH per-connection server daemon (147.75.109.163:34774). May 10 00:05:54.355881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 10 00:05:54.362794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:05:54.479625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:05:54.492977 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:05:54.533942 kubelet[1810]: E0510 00:05:54.533860 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:05:54.535657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:05:54.535833 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:05:54.755032 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 34774 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:54.757293 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:54.763863 systemd-logind[1457]: New session 6 of user core. May 10 00:05:54.777752 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 10 00:05:55.288729 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 00:05:55.289108 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:55.293392 sudo[1818]: pam_unix(sudo:session): session closed for user root May 10 00:05:55.299862 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 10 00:05:55.300250 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:55.322911 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 10 00:05:55.326351 auditctl[1821]: No rules May 10 00:05:55.327012 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:05:55.327195 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 10 00:05:55.331307 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:05:55.368642 augenrules[1839]: No rules May 10 00:05:55.371577 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:05:55.373011 sudo[1817]: pam_unix(sudo:session): session closed for user root May 10 00:05:55.535838 sshd[1800]: pam_unix(sshd:session): session closed for user core May 10 00:05:55.541435 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. May 10 00:05:55.542646 systemd[1]: sshd@5-138.199.169.250:22-147.75.109.163:34774.service: Deactivated successfully. May 10 00:05:55.544968 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:05:55.546867 systemd-logind[1457]: Removed session 6. May 10 00:05:55.720866 systemd[1]: Started sshd@6-138.199.169.250:22-147.75.109.163:34776.service - OpenSSH per-connection server daemon (147.75.109.163:34776). May 10 00:05:56.734830 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 34776 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:05:56.737723 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:05:56.743976 systemd-logind[1457]: New session 7 of user core. May 10 00:05:56.751819 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 00:05:57.273810 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:05:57.274117 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:05:57.348186 systemd[1]: Started sshd@7-138.199.169.250:22-103.226.249.77:54029.service - OpenSSH per-connection server daemon (103.226.249.77:54029). May 10 00:05:57.607075 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 00:05:57.607445 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 00:05:57.873696 dockerd[1869]: time="2025-05-10T00:05:57.873048848Z" level=info msg="Starting up" May 10 00:05:57.967978 dockerd[1869]: time="2025-05-10T00:05:57.967882635Z" level=info msg="Loading containers: start." May 10 00:05:58.094437 kernel: Initializing XFRM netlink socket May 10 00:05:58.174214 systemd-networkd[1377]: docker0: Link UP May 10 00:05:58.197014 dockerd[1869]: time="2025-05-10T00:05:58.196920339Z" level=info msg="Loading containers: done." 
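The audit-rules sequence earlier in this stretch (delete the rule fragments, restart audit-rules.service, auditctl and augenrules both report "No rules") follows the normal auditd workflow: augenrules concatenates every *.rules fragment under /etc/audit/rules.d/ and loads the result, so removing the fragments leaves an empty rule set. A small sketch of inspecting or repopulating that state, assuming auditctl and augenrules are available as they are on this host; the example rule is hypothetical:

    # Kernel audit rules currently loaded (prints "No rules" at this point).
    sudo auditctl -l

    # Fragments assembled by augenrules live here.
    ls /etc/audit/rules.d/

    # Hypothetical fragment: watch /etc/kubernetes for writes and attribute changes.
    echo '-w /etc/kubernetes -p wa -k k8s-config' | \
      sudo tee /etc/audit/rules.d/60-k8s.rules
    sudo augenrules --load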
May 10 00:05:58.210574 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3159831607-merged.mount: Deactivated successfully. May 10 00:05:58.213124 dockerd[1869]: time="2025-05-10T00:05:58.212686513Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:05:58.213124 dockerd[1869]: time="2025-05-10T00:05:58.212797967Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 10 00:05:58.213124 dockerd[1869]: time="2025-05-10T00:05:58.212903181Z" level=info msg="Daemon has completed initialization" May 10 00:05:58.248670 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 00:05:58.249499 dockerd[1869]: time="2025-05-10T00:05:58.248330525Z" level=info msg="API listen on /run/docker.sock" May 10 00:05:58.423141 sshd[1860]: Connection closed by authenticating user root 103.226.249.77 port 54029 [preauth] May 10 00:05:58.424824 systemd[1]: sshd@7-138.199.169.250:22-103.226.249.77:54029.service: Deactivated successfully. May 10 00:05:59.280156 containerd[1477]: time="2025-05-10T00:05:59.279872936Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 00:05:59.921498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756257276.mount: Deactivated successfully. May 10 00:06:00.764342 containerd[1477]: time="2025-05-10T00:06:00.764254041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.766509 containerd[1477]: time="2025-05-10T00:06:00.766439165Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554700" May 10 00:06:00.766821 containerd[1477]: time="2025-05-10T00:06:00.766751925Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.769990 containerd[1477]: time="2025-05-10T00:06:00.769915016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:00.771587 containerd[1477]: time="2025-05-10T00:06:00.771244828Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.491307404s" May 10 00:06:00.771587 containerd[1477]: time="2025-05-10T00:06:00.771296875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 10 00:06:00.772267 containerd[1477]: time="2025-05-10T00:06:00.772241638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 00:06:01.905881 containerd[1477]: time="2025-05-10T00:06:01.904740763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.907729 
containerd[1477]: time="2025-05-10T00:06:01.907630894Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458998" May 10 00:06:01.908725 containerd[1477]: time="2025-05-10T00:06:01.908614620Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.913030 containerd[1477]: time="2025-05-10T00:06:01.912944376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:01.915092 containerd[1477]: time="2025-05-10T00:06:01.914551222Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.142074834s" May 10 00:06:01.915092 containerd[1477]: time="2025-05-10T00:06:01.914628152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 10 00:06:01.915602 containerd[1477]: time="2025-05-10T00:06:01.915561592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 10 00:06:02.882351 containerd[1477]: time="2025-05-10T00:06:02.882265312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.884377 containerd[1477]: time="2025-05-10T00:06:02.884326855Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125833" May 10 00:06:02.885231 containerd[1477]: time="2025-05-10T00:06:02.884854122Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.888990 containerd[1477]: time="2025-05-10T00:06:02.888904756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:02.890889 containerd[1477]: time="2025-05-10T00:06:02.890246447Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 974.639609ms" May 10 00:06:02.890889 containerd[1477]: time="2025-05-10T00:06:02.890294333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 10 00:06:02.891463 containerd[1477]: time="2025-05-10T00:06:02.891423637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 00:06:03.909809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869444919.mount: Deactivated successfully. 
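The PullImage/ImageCreate sequences above are containerd's CRI plugin fetching the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, then kube-proxy) before the static pods can start. The same set can be pre-pulled or checked by hand; a sketch, assuming crictl is installed and pointed at the containerd socket, using the versions this log actually pulls:

    # Pre-pull everything kubeadm needs for this Kubernetes version.
    kubeadm config images pull --kubernetes-version v1.31.8

    # Or pull and list individual images straight through the CRI socket.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.31.8
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images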
May 10 00:06:04.212664 containerd[1477]: time="2025-05-10T00:06:04.212495762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.214591 containerd[1477]: time="2025-05-10T00:06:04.214346353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871943" May 10 00:06:04.215699 containerd[1477]: time="2025-05-10T00:06:04.215653836Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.218844 containerd[1477]: time="2025-05-10T00:06:04.218767185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:04.219860 containerd[1477]: time="2025-05-10T00:06:04.219652935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.328043314s" May 10 00:06:04.219860 containerd[1477]: time="2025-05-10T00:06:04.219702101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 10 00:06:04.220478 containerd[1477]: time="2025-05-10T00:06:04.220451435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:06:04.606131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 10 00:06:04.615254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:04.734826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:04.740171 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:04.789548 kubelet[2089]: E0510 00:06:04.789484 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:04.791820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:04.792124 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:04.892788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484471635.mount: Deactivated successfully. 
May 10 00:06:05.630939 containerd[1477]: time="2025-05-10T00:06:05.630678767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.633057 containerd[1477]: time="2025-05-10T00:06:05.633002695Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" May 10 00:06:05.633757 containerd[1477]: time="2025-05-10T00:06:05.633140192Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.636968 containerd[1477]: time="2025-05-10T00:06:05.636879734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:05.638271 containerd[1477]: time="2025-05-10T00:06:05.638202858Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.417715379s" May 10 00:06:05.638271 containerd[1477]: time="2025-05-10T00:06:05.638250984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:06:05.639834 containerd[1477]: time="2025-05-10T00:06:05.639223024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:06:06.185720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444270032.mount: Deactivated successfully. 
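registry.k8s.io/pause above is the minimal "infra" image that holds each pod's namespaces; containerd keeps its configured sandbox image pinned so image garbage collection never removes it (the later RunPodSandbox events use pause:3.8 with an io.cri-containerd.pinned label, while 3.10 is pulled here as part of the v1.31 image set). A sketch of where that image is configured, assuming containerd's default CRI config layout:

    # Show the sandbox image containerd is configured to use.
    containerd config dump | grep sandbox_image

    # The setting lives in /etc/containerd/config.toml, roughly:
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.8"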
May 10 00:06:06.193596 containerd[1477]: time="2025-05-10T00:06:06.193435542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.194956 containerd[1477]: time="2025-05-10T00:06:06.194911483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 10 00:06:06.195606 containerd[1477]: time="2025-05-10T00:06:06.195187197Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.198487 containerd[1477]: time="2025-05-10T00:06:06.198439316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:06.199612 containerd[1477]: time="2025-05-10T00:06:06.199575615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.292584ms" May 10 00:06:06.199745 containerd[1477]: time="2025-05-10T00:06:06.199725994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 10 00:06:06.200266 containerd[1477]: time="2025-05-10T00:06:06.200233296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 10 00:06:06.784232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703823372.mount: Deactivated successfully. May 10 00:06:10.085220 containerd[1477]: time="2025-05-10T00:06:10.085157006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.086693 containerd[1477]: time="2025-05-10T00:06:10.086656842Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 10 00:06:10.087431 containerd[1477]: time="2025-05-10T00:06:10.086959648Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.092154 containerd[1477]: time="2025-05-10T00:06:10.092093484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:10.095064 containerd[1477]: time="2025-05-10T00:06:10.094824744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.894443554s" May 10 00:06:10.095064 containerd[1477]: time="2025-05-10T00:06:10.094905175Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 10 00:06:14.856169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
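The "Scheduled restart job, restart counter is at 13" line is systemd's Restart= logic re-queuing kubelet.service after every config-file failure, roughly every ten seconds, until the configuration appears. A sketch of how that behaviour can be inspected, assuming a stock kubeadm-style unit (Restart=always, RestartSec=10) rather than anything specific to this host:

    # Restart policy and failure counters for the unit.
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    systemctl status kubelet.service --no-pager

    # Follow the unit's journal to watch the next attempt.
    journalctl -u kubelet.service -f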
May 10 00:06:14.863739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:14.990094 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:06:14.990596 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:06:14.991253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.003987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.043882 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-7.scope)... May 10 00:06:15.044060 systemd[1]: Reloading... May 10 00:06:15.155502 zram_generator::config[2272]: No configuration found. May 10 00:06:15.258542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:15.330440 systemd[1]: Reloading finished in 285 ms. May 10 00:06:15.379937 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:06:15.380029 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:06:15.381464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.387986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:15.510913 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:15.511601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:15.555304 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:15.555304 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:15.555304 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
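The deprecation warnings above mean the kubelet is still being handed --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on its command line (assembled from the KUBELET_* environment variables the unit references) even though current kubelets expect them in the config file. A sketch of how to see where those flags come from and how one of them would migrate, assuming a kubeadm-style drop-in layout; the excerpt is hypothetical, not read from this host:

    # Show the unit plus any drop-ins that build the kubelet command line.
    systemctl cat kubelet.service

    # Flags such as --container-runtime-endpoint map onto KubeletConfiguration
    # fields in /var/lib/kubelet/config.yaml, e.g. (hypothetical excerpt):
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    grep -n containerRuntimeEndpoint /var/lib/kubelet/config.yaml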
May 10 00:06:15.556004 kubelet[2320]: I0510 00:06:15.555397 2320 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:16.766076 kubelet[2320]: I0510 00:06:16.765998 2320 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:06:16.766076 kubelet[2320]: I0510 00:06:16.766055 2320 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:16.766754 kubelet[2320]: I0510 00:06:16.766589 2320 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:06:16.796807 kubelet[2320]: E0510 00:06:16.796711 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.169.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:16.799822 kubelet[2320]: I0510 00:06:16.799534 2320 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:16.812511 kubelet[2320]: E0510 00:06:16.812461 2320 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:06:16.812702 kubelet[2320]: I0510 00:06:16.812684 2320 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:06:16.817477 kubelet[2320]: I0510 00:06:16.817445 2320 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:16.819436 kubelet[2320]: I0510 00:06:16.818765 2320 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:06:16.819436 kubelet[2320]: I0510 00:06:16.818947 2320 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:16.819436 kubelet[2320]: I0510 00:06:16.818987 2320 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-025f904aa2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:06:16.819436 kubelet[2320]: I0510 00:06:16.819354 2320 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:16.819657 kubelet[2320]: I0510 00:06:16.819365 2320 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:06:16.819886 kubelet[2320]: I0510 00:06:16.819867 2320 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:16.822990 kubelet[2320]: I0510 00:06:16.822943 2320 kubelet.go:408] "Attempting to sync node with API server" May 10 00:06:16.823095 kubelet[2320]: I0510 00:06:16.823000 2320 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:16.823095 kubelet[2320]: I0510 00:06:16.823090 2320 kubelet.go:314] "Adding apiserver pod source" May 10 00:06:16.823140 kubelet[2320]: I0510 00:06:16.823101 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:16.824263 kubelet[2320]: W0510 00:06:16.824195 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.169.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-025f904aa2&limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:16.824439 kubelet[2320]: E0510 00:06:16.824393 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.199.169.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-025f904aa2&limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:16.825996 kubelet[2320]: W0510 00:06:16.825944 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.169.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:16.826135 kubelet[2320]: E0510 00:06:16.826113 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.169.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:16.826315 kubelet[2320]: I0510 00:06:16.826298 2320 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:16.828348 kubelet[2320]: I0510 00:06:16.828313 2320 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:16.829449 kubelet[2320]: W0510 00:06:16.829428 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:06:16.831483 kubelet[2320]: I0510 00:06:16.831457 2320 server.go:1269] "Started kubelet" May 10 00:06:16.832900 kubelet[2320]: I0510 00:06:16.832860 2320 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:16.834185 kubelet[2320]: I0510 00:06:16.834150 2320 server.go:460] "Adding debug handlers to kubelet server" May 10 00:06:16.835930 kubelet[2320]: I0510 00:06:16.835317 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:16.835930 kubelet[2320]: I0510 00:06:16.835649 2320 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:16.837321 kubelet[2320]: I0510 00:06:16.837287 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:16.837587 kubelet[2320]: E0510 00:06:16.835824 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.169.250:6443/api/v1/namespaces/default/events\": dial tcp 138.199.169.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-025f904aa2.183e01b3b60d18c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-025f904aa2,UID:ci-4081-3-3-n-025f904aa2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-025f904aa2,},FirstTimestamp:2025-05-10 00:06:16.83141652 +0000 UTC m=+1.314642143,LastTimestamp:2025-05-10 00:06:16.83141652 +0000 UTC m=+1.314642143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-025f904aa2,}" May 10 00:06:16.840526 kubelet[2320]: I0510 00:06:16.839645 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:06:16.843528 kubelet[2320]: E0510 
00:06:16.843508 2320 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:16.843930 kubelet[2320]: E0510 00:06:16.843889 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-025f904aa2\" not found" May 10 00:06:16.844223 kubelet[2320]: I0510 00:06:16.844210 2320 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:06:16.844542 kubelet[2320]: I0510 00:06:16.844522 2320 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:06:16.844679 kubelet[2320]: I0510 00:06:16.844668 2320 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:16.845528 kubelet[2320]: W0510 00:06:16.845481 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.169.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:16.845653 kubelet[2320]: E0510 00:06:16.845635 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.169.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:16.846004 kubelet[2320]: I0510 00:06:16.845983 2320 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:16.846177 kubelet[2320]: I0510 00:06:16.846157 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:16.849131 kubelet[2320]: E0510 00:06:16.849089 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.169.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-025f904aa2?timeout=10s\": dial tcp 138.199.169.250:6443: connect: connection refused" interval="200ms" May 10 00:06:16.849891 kubelet[2320]: I0510 00:06:16.849866 2320 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:16.859798 kubelet[2320]: I0510 00:06:16.859726 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:16.861026 kubelet[2320]: I0510 00:06:16.860970 2320 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:06:16.861026 kubelet[2320]: I0510 00:06:16.861002 2320 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:16.861026 kubelet[2320]: I0510 00:06:16.861022 2320 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:06:16.861160 kubelet[2320]: E0510 00:06:16.861067 2320 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:16.870434 kubelet[2320]: W0510 00:06:16.869631 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.169.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:16.870434 kubelet[2320]: E0510 00:06:16.869718 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.169.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:16.877599 kubelet[2320]: I0510 00:06:16.877566 2320 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:16.877770 kubelet[2320]: I0510 00:06:16.877753 2320 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:16.877886 kubelet[2320]: I0510 00:06:16.877873 2320 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:16.880169 kubelet[2320]: I0510 00:06:16.880145 2320 policy_none.go:49] "None policy: Start" May 10 00:06:16.881509 kubelet[2320]: I0510 00:06:16.881464 2320 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:16.881603 kubelet[2320]: I0510 00:06:16.881514 2320 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:16.890712 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 00:06:16.903538 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 00:06:16.912114 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 00:06:16.921976 kubelet[2320]: I0510 00:06:16.921052 2320 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:16.921976 kubelet[2320]: I0510 00:06:16.921467 2320 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:06:16.921976 kubelet[2320]: I0510 00:06:16.921492 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:16.921976 kubelet[2320]: I0510 00:06:16.921836 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:16.926847 kubelet[2320]: E0510 00:06:16.926822 2320 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-025f904aa2\" not found" May 10 00:06:16.975972 systemd[1]: Created slice kubepods-burstable-pod95a57406e572e7f4b554ff31157b2e51.slice - libcontainer container kubepods-burstable-pod95a57406e572e7f4b554ff31157b2e51.slice. May 10 00:06:17.003679 systemd[1]: Created slice kubepods-burstable-pod3d4370b56db256e1f5bbc13e94948d6b.slice - libcontainer container kubepods-burstable-pod3d4370b56db256e1f5bbc13e94948d6b.slice. 
May 10 00:06:17.025547 kubelet[2320]: I0510 00:06:17.024841 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.025547 kubelet[2320]: E0510 00:06:17.025366 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.169.250:6443/api/v1/nodes\": dial tcp 138.199.169.250:6443: connect: connection refused" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.030154 systemd[1]: Created slice kubepods-burstable-poda4a14425fae56e81ed5745a9257a079d.slice - libcontainer container kubepods-burstable-poda4a14425fae56e81ed5745a9257a079d.slice. May 10 00:06:17.046372 kubelet[2320]: I0510 00:06:17.046307 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046372 kubelet[2320]: I0510 00:06:17.046361 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046605 kubelet[2320]: I0510 00:06:17.046391 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4a14425fae56e81ed5745a9257a079d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-025f904aa2\" (UID: \"a4a14425fae56e81ed5745a9257a079d\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046605 kubelet[2320]: I0510 00:06:17.046564 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046605 kubelet[2320]: I0510 00:06:17.046592 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046711 kubelet[2320]: I0510 00:06:17.046646 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046746 kubelet[2320]: I0510 00:06:17.046674 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: 
\"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046786 kubelet[2320]: I0510 00:06:17.046744 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.046901 kubelet[2320]: I0510 00:06:17.046833 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:17.049764 kubelet[2320]: E0510 00:06:17.049716 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.169.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-025f904aa2?timeout=10s\": dial tcp 138.199.169.250:6443: connect: connection refused" interval="400ms" May 10 00:06:17.229214 kubelet[2320]: I0510 00:06:17.229133 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.229696 kubelet[2320]: E0510 00:06:17.229612 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.169.250:6443/api/v1/nodes\": dial tcp 138.199.169.250:6443: connect: connection refused" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.303635 containerd[1477]: time="2025-05-10T00:06:17.303167154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-025f904aa2,Uid:95a57406e572e7f4b554ff31157b2e51,Namespace:kube-system,Attempt:0,}" May 10 00:06:17.324574 containerd[1477]: time="2025-05-10T00:06:17.324089737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-025f904aa2,Uid:3d4370b56db256e1f5bbc13e94948d6b,Namespace:kube-system,Attempt:0,}" May 10 00:06:17.334953 containerd[1477]: time="2025-05-10T00:06:17.334357802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-025f904aa2,Uid:a4a14425fae56e81ed5745a9257a079d,Namespace:kube-system,Attempt:0,}" May 10 00:06:17.451391 kubelet[2320]: E0510 00:06:17.450798 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.169.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-025f904aa2?timeout=10s\": dial tcp 138.199.169.250:6443: connect: connection refused" interval="800ms" May 10 00:06:17.632708 kubelet[2320]: I0510 00:06:17.632626 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.633854 kubelet[2320]: E0510 00:06:17.633765 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.169.250:6443/api/v1/nodes\": dial tcp 138.199.169.250:6443: connect: connection refused" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:17.890844 kubelet[2320]: W0510 00:06:17.890587 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://138.199.169.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:17.890844 kubelet[2320]: E0510 00:06:17.890686 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.169.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:17.903605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483552179.mount: Deactivated successfully. May 10 00:06:17.910130 containerd[1477]: time="2025-05-10T00:06:17.909994285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:17.911592 containerd[1477]: time="2025-05-10T00:06:17.911542454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 10 00:06:17.915266 containerd[1477]: time="2025-05-10T00:06:17.915194033Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:17.916741 containerd[1477]: time="2025-05-10T00:06:17.916697325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:17.918397 containerd[1477]: time="2025-05-10T00:06:17.918219017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:17.919549 containerd[1477]: time="2025-05-10T00:06:17.919503605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:17.921041 containerd[1477]: time="2025-05-10T00:06:17.920910264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:17.922120 containerd[1477]: time="2025-05-10T00:06:17.921994386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:17.925018 containerd[1477]: time="2025-05-10T00:06:17.924765308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 621.455045ms" May 10 00:06:17.927393 containerd[1477]: time="2025-05-10T00:06:17.927295847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.477279ms" May 10 00:06:17.927517 containerd[1477]: time="2025-05-10T00:06:17.927491793Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.214589ms" May 10 00:06:18.033717 kubelet[2320]: W0510 00:06:18.033643 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.169.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-025f904aa2&limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:18.034276 kubelet[2320]: E0510 00:06:18.033722 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.169.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-025f904aa2&limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:18.047375 containerd[1477]: time="2025-05-10T00:06:18.046879747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.047375 containerd[1477]: time="2025-05-10T00:06:18.046945143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.047375 containerd[1477]: time="2025-05-10T00:06:18.046961502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.047375 containerd[1477]: time="2025-05-10T00:06:18.047076454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.048060829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051767901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051780740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051850376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051466521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051549036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051561395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.052556 containerd[1477]: time="2025-05-10T00:06:18.051663628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:18.082652 systemd[1]: Started cri-containerd-0a19bb542a5576b0386d822cafac70fe881de51ebc7a13e86a3a6a7011fea56a.scope - libcontainer container 0a19bb542a5576b0386d822cafac70fe881de51ebc7a13e86a3a6a7011fea56a. May 10 00:06:18.085098 systemd[1]: Started cri-containerd-89c4bae1557295bfaace59320fd50e0f7995d8b3394f80216e08b00a38d87cdd.scope - libcontainer container 89c4bae1557295bfaace59320fd50e0f7995d8b3394f80216e08b00a38d87cdd. May 10 00:06:18.088292 systemd[1]: Started cri-containerd-93eb49dbec8aff17cc0736ef1727852f35969e2a6a891c48d4a153593ba53300.scope - libcontainer container 93eb49dbec8aff17cc0736ef1727852f35969e2a6a891c48d4a153593ba53300. May 10 00:06:18.146914 containerd[1477]: time="2025-05-10T00:06:18.146645008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-025f904aa2,Uid:a4a14425fae56e81ed5745a9257a079d,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c4bae1557295bfaace59320fd50e0f7995d8b3394f80216e08b00a38d87cdd\"" May 10 00:06:18.157281 containerd[1477]: time="2025-05-10T00:06:18.157025915Z" level=info msg="CreateContainer within sandbox \"89c4bae1557295bfaace59320fd50e0f7995d8b3394f80216e08b00a38d87cdd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:06:18.157984 containerd[1477]: time="2025-05-10T00:06:18.157909616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-025f904aa2,Uid:95a57406e572e7f4b554ff31157b2e51,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a19bb542a5576b0386d822cafac70fe881de51ebc7a13e86a3a6a7011fea56a\"" May 10 00:06:18.162705 containerd[1477]: time="2025-05-10T00:06:18.162659938Z" level=info msg="CreateContainer within sandbox \"0a19bb542a5576b0386d822cafac70fe881de51ebc7a13e86a3a6a7011fea56a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:06:18.166426 containerd[1477]: time="2025-05-10T00:06:18.166367011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-025f904aa2,Uid:3d4370b56db256e1f5bbc13e94948d6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"93eb49dbec8aff17cc0736ef1727852f35969e2a6a891c48d4a153593ba53300\"" May 10 00:06:18.170495 containerd[1477]: time="2025-05-10T00:06:18.170390702Z" level=info msg="CreateContainer within sandbox \"93eb49dbec8aff17cc0736ef1727852f35969e2a6a891c48d4a153593ba53300\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:06:18.179497 containerd[1477]: time="2025-05-10T00:06:18.179376183Z" level=info msg="CreateContainer within sandbox \"89c4bae1557295bfaace59320fd50e0f7995d8b3394f80216e08b00a38d87cdd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d4982bbce90e8510ec797d7aaef87cda2336167e27b061c3cf0aabf27580b1e8\"" May 10 00:06:18.180836 containerd[1477]: time="2025-05-10T00:06:18.180798448Z" level=info msg="StartContainer for \"d4982bbce90e8510ec797d7aaef87cda2336167e27b061c3cf0aabf27580b1e8\"" May 10 00:06:18.188796 containerd[1477]: time="2025-05-10T00:06:18.188748797Z" level=info msg="CreateContainer within sandbox \"0a19bb542a5576b0386d822cafac70fe881de51ebc7a13e86a3a6a7011fea56a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"759cc0b3d4d2c01cd6ce76e4a9e46f535a6c75fd18300a67a5397cadec78d49b\"" May 10 00:06:18.190017 containerd[1477]: time="2025-05-10T00:06:18.189425792Z" level=info msg="StartContainer for 
\"759cc0b3d4d2c01cd6ce76e4a9e46f535a6c75fd18300a67a5397cadec78d49b\"" May 10 00:06:18.196805 containerd[1477]: time="2025-05-10T00:06:18.196570115Z" level=info msg="CreateContainer within sandbox \"93eb49dbec8aff17cc0736ef1727852f35969e2a6a891c48d4a153593ba53300\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2660aa8815acc76612d8165a1ab2c320cccb7896759cee8903fcbfd1b3587b7b\"" May 10 00:06:18.198376 containerd[1477]: time="2025-05-10T00:06:18.198339517Z" level=info msg="StartContainer for \"2660aa8815acc76612d8165a1ab2c320cccb7896759cee8903fcbfd1b3587b7b\"" May 10 00:06:18.219288 kubelet[2320]: W0510 00:06:18.218879 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.169.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:18.219288 kubelet[2320]: E0510 00:06:18.218957 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.169.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:18.219014 systemd[1]: Started cri-containerd-d4982bbce90e8510ec797d7aaef87cda2336167e27b061c3cf0aabf27580b1e8.scope - libcontainer container d4982bbce90e8510ec797d7aaef87cda2336167e27b061c3cf0aabf27580b1e8. May 10 00:06:18.243852 kubelet[2320]: W0510 00:06:18.243789 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.169.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.169.250:6443: connect: connection refused May 10 00:06:18.244035 kubelet[2320]: E0510 00:06:18.243882 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.169.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.169.250:6443: connect: connection refused" logger="UnhandledError" May 10 00:06:18.246858 systemd[1]: Started cri-containerd-759cc0b3d4d2c01cd6ce76e4a9e46f535a6c75fd18300a67a5397cadec78d49b.scope - libcontainer container 759cc0b3d4d2c01cd6ce76e4a9e46f535a6c75fd18300a67a5397cadec78d49b. May 10 00:06:18.254112 kubelet[2320]: E0510 00:06:18.253633 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.169.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-025f904aa2?timeout=10s\": dial tcp 138.199.169.250:6443: connect: connection refused" interval="1.6s" May 10 00:06:18.265433 systemd[1]: Started cri-containerd-2660aa8815acc76612d8165a1ab2c320cccb7896759cee8903fcbfd1b3587b7b.scope - libcontainer container 2660aa8815acc76612d8165a1ab2c320cccb7896759cee8903fcbfd1b3587b7b. 
May 10 00:06:18.277268 containerd[1477]: time="2025-05-10T00:06:18.275878340Z" level=info msg="StartContainer for \"d4982bbce90e8510ec797d7aaef87cda2336167e27b061c3cf0aabf27580b1e8\" returns successfully" May 10 00:06:18.308603 containerd[1477]: time="2025-05-10T00:06:18.308376371Z" level=info msg="StartContainer for \"759cc0b3d4d2c01cd6ce76e4a9e46f535a6c75fd18300a67a5397cadec78d49b\" returns successfully" May 10 00:06:18.328196 containerd[1477]: time="2025-05-10T00:06:18.327048444Z" level=info msg="StartContainer for \"2660aa8815acc76612d8165a1ab2c320cccb7896759cee8903fcbfd1b3587b7b\" returns successfully" May 10 00:06:18.436737 kubelet[2320]: I0510 00:06:18.436623 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:18.436997 kubelet[2320]: E0510 00:06:18.436963 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.169.250:6443/api/v1/nodes\": dial tcp 138.199.169.250:6443: connect: connection refused" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:20.039066 kubelet[2320]: I0510 00:06:20.038980 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:20.937383 kubelet[2320]: I0510 00:06:20.937340 2320 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:20.937383 kubelet[2320]: E0510 00:06:20.937383 2320 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-025f904aa2\": node \"ci-4081-3-3-n-025f904aa2\" not found" May 10 00:06:21.829769 kubelet[2320]: I0510 00:06:21.829715 2320 apiserver.go:52] "Watching apiserver" May 10 00:06:21.845701 kubelet[2320]: I0510 00:06:21.845651 2320 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:06:22.976001 systemd[1]: Reloading requested from client PID 2593 ('systemctl') (unit session-7.scope)... May 10 00:06:22.976581 systemd[1]: Reloading... May 10 00:06:23.087439 zram_generator::config[2633]: No configuration found. May 10 00:06:23.189199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:23.272328 systemd[1]: Reloading finished in 295 ms. May 10 00:06:23.318693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:23.330030 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:23.330628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:23.330835 systemd[1]: kubelet.service: Consumed 1.744s CPU time, 113.6M memory peak, 0B memory swap peak. May 10 00:06:23.338778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:23.458633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:23.463162 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:23.511166 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:23.512454 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 10 00:06:23.512454 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:23.512454 kubelet[2677]: I0510 00:06:23.511589 2677 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:23.520639 kubelet[2677]: I0510 00:06:23.520599 2677 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:06:23.520779 kubelet[2677]: I0510 00:06:23.520768 2677 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:23.521707 kubelet[2677]: I0510 00:06:23.521270 2677 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:06:23.528509 kubelet[2677]: I0510 00:06:23.527066 2677 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:06:23.533723 kubelet[2677]: I0510 00:06:23.533693 2677 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:23.540470 kubelet[2677]: E0510 00:06:23.538677 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:06:23.540470 kubelet[2677]: I0510 00:06:23.538720 2677 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:06:23.541009 kubelet[2677]: I0510 00:06:23.540916 2677 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:23.541113 kubelet[2677]: I0510 00:06:23.541038 2677 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:06:23.541173 kubelet[2677]: I0510 00:06:23.541122 2677 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:23.541357 kubelet[2677]: I0510 00:06:23.541141 2677 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-025f904aa2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:06:23.541357 kubelet[2677]: I0510 00:06:23.541315 2677 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:23.541357 kubelet[2677]: I0510 00:06:23.541325 2677 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:06:23.541357 kubelet[2677]: I0510 00:06:23.541356 2677 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:23.541707 kubelet[2677]: I0510 00:06:23.541506 2677 kubelet.go:408] "Attempting to sync node with API server" May 10 00:06:23.541707 kubelet[2677]: I0510 00:06:23.541521 2677 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:23.541707 kubelet[2677]: I0510 00:06:23.541544 2677 kubelet.go:314] "Adding apiserver pod source" May 10 00:06:23.541707 kubelet[2677]: I0510 00:06:23.541555 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:23.546905 kubelet[2677]: I0510 00:06:23.545781 2677 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:06:23.546905 kubelet[2677]: I0510 00:06:23.546313 2677 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:23.546905 kubelet[2677]: I0510 00:06:23.546749 2677 server.go:1269] "Started kubelet" May 10 00:06:23.549941 kubelet[2677]: I0510 00:06:23.549790 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:23.558429 
kubelet[2677]: I0510 00:06:23.558295 2677 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:23.559354 kubelet[2677]: I0510 00:06:23.559327 2677 server.go:460] "Adding debug handlers to kubelet server" May 10 00:06:23.568356 kubelet[2677]: I0510 00:06:23.568200 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:23.569362 kubelet[2677]: I0510 00:06:23.569330 2677 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:23.569850 kubelet[2677]: I0510 00:06:23.569776 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:06:23.571947 kubelet[2677]: I0510 00:06:23.571919 2677 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:06:23.573037 kubelet[2677]: E0510 00:06:23.573009 2677 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-025f904aa2\" not found" May 10 00:06:23.579890 kubelet[2677]: I0510 00:06:23.579857 2677 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:06:23.580145 kubelet[2677]: I0510 00:06:23.580132 2677 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:23.584340 kubelet[2677]: E0510 00:06:23.584290 2677 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:23.585134 kubelet[2677]: I0510 00:06:23.585103 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:23.587850 kubelet[2677]: I0510 00:06:23.587530 2677 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:06:23.590153 kubelet[2677]: I0510 00:06:23.588350 2677 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:23.590300 kubelet[2677]: I0510 00:06:23.590286 2677 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:06:23.590436 kubelet[2677]: E0510 00:06:23.590393 2677 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:23.590937 kubelet[2677]: I0510 00:06:23.590903 2677 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:23.591030 kubelet[2677]: I0510 00:06:23.591005 2677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:23.597220 kubelet[2677]: I0510 00:06:23.596267 2677 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:23.666663 kubelet[2677]: I0510 00:06:23.666628 2677 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:23.666663 kubelet[2677]: I0510 00:06:23.666656 2677 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:23.666875 kubelet[2677]: I0510 00:06:23.666683 2677 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:23.666915 kubelet[2677]: I0510 00:06:23.666884 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:06:23.666915 kubelet[2677]: I0510 00:06:23.666898 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:06:23.666960 kubelet[2677]: I0510 00:06:23.666919 2677 policy_none.go:49] "None policy: Start" May 10 00:06:23.667885 kubelet[2677]: I0510 00:06:23.667859 2677 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:23.667995 kubelet[2677]: I0510 00:06:23.667896 2677 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:23.668143 kubelet[2677]: I0510 00:06:23.668125 2677 state_mem.go:75] "Updated machine memory state" May 10 00:06:23.672572 kubelet[2677]: I0510 00:06:23.672546 2677 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:23.673158 kubelet[2677]: I0510 00:06:23.672726 2677 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:06:23.673158 kubelet[2677]: I0510 00:06:23.672744 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:23.673158 kubelet[2677]: I0510 00:06:23.673017 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:23.704858 kubelet[2677]: E0510 00:06:23.704771 2677 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.778809 kubelet[2677]: I0510 00:06:23.778699 2677 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:23.781758 kubelet[2677]: I0510 00:06:23.781387 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4a14425fae56e81ed5745a9257a079d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-025f904aa2\" (UID: \"a4a14425fae56e81ed5745a9257a079d\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-025f904aa2" May 
10 00:06:23.781758 kubelet[2677]: I0510 00:06:23.781567 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.781758 kubelet[2677]: I0510 00:06:23.781589 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.781758 kubelet[2677]: I0510 00:06:23.781610 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.781758 kubelet[2677]: I0510 00:06:23.781637 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.782044 kubelet[2677]: I0510 00:06:23.781658 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.782044 kubelet[2677]: I0510 00:06:23.781683 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.782044 kubelet[2677]: I0510 00:06:23.781700 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d4370b56db256e1f5bbc13e94948d6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-025f904aa2\" (UID: \"3d4370b56db256e1f5bbc13e94948d6b\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.782044 kubelet[2677]: I0510 00:06:23.781716 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95a57406e572e7f4b554ff31157b2e51-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" (UID: \"95a57406e572e7f4b554ff31157b2e51\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:23.787614 kubelet[2677]: I0510 00:06:23.787500 2677 kubelet_node_status.go:111] "Node was 
previously registered" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:23.787614 kubelet[2677]: I0510 00:06:23.787591 2677 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-025f904aa2" May 10 00:06:23.970013 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:06:23.970374 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 10 00:06:24.419094 sudo[2710]: pam_unix(sudo:session): session closed for user root May 10 00:06:24.543416 kubelet[2677]: I0510 00:06:24.543302 2677 apiserver.go:52] "Watching apiserver" May 10 00:06:24.580667 kubelet[2677]: I0510 00:06:24.580598 2677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:06:24.645729 kubelet[2677]: E0510 00:06:24.645689 2677 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-025f904aa2\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" May 10 00:06:24.674632 kubelet[2677]: I0510 00:06:24.674497 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-025f904aa2" podStartSLOduration=1.674478103 podStartE2EDuration="1.674478103s" podCreationTimestamp="2025-05-10 00:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:24.663098449 +0000 UTC m=+1.194812386" watchObservedRunningTime="2025-05-10 00:06:24.674478103 +0000 UTC m=+1.206192040" May 10 00:06:24.696469 kubelet[2677]: I0510 00:06:24.696346 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-025f904aa2" podStartSLOduration=1.69632297 podStartE2EDuration="1.69632297s" podCreationTimestamp="2025-05-10 00:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:24.675828808 +0000 UTC m=+1.207542745" watchObservedRunningTime="2025-05-10 00:06:24.69632297 +0000 UTC m=+1.228036907" May 10 00:06:24.710643 kubelet[2677]: I0510 00:06:24.710554 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-025f904aa2" podStartSLOduration=2.710525229 podStartE2EDuration="2.710525229s" podCreationTimestamp="2025-05-10 00:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:24.697146256 +0000 UTC m=+1.228860233" watchObservedRunningTime="2025-05-10 00:06:24.710525229 +0000 UTC m=+1.242239206" May 10 00:06:26.137666 sudo[1850]: pam_unix(sudo:session): session closed for user root May 10 00:06:26.303088 sshd[1847]: pam_unix(sshd:session): session closed for user core May 10 00:06:26.309476 systemd[1]: sshd@6-138.199.169.250:22-147.75.109.163:34776.service: Deactivated successfully. May 10 00:06:26.312747 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:06:26.312955 systemd[1]: session-7.scope: Consumed 6.624s CPU time, 150.2M memory peak, 0B memory swap peak. May 10 00:06:26.313840 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. May 10 00:06:26.316034 systemd-logind[1457]: Removed session 7. 
May 10 00:06:29.958300 kubelet[2677]: I0510 00:06:29.958234 2677 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:06:29.959428 containerd[1477]: time="2025-05-10T00:06:29.959243769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:06:29.959999 kubelet[2677]: I0510 00:06:29.959970 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:06:30.798072 systemd[1]: Created slice kubepods-besteffort-poddadd715a_295d_48fd_aa34_c999d0a92a95.slice - libcontainer container kubepods-besteffort-poddadd715a_295d_48fd_aa34_c999d0a92a95.slice. May 10 00:06:30.820335 systemd[1]: Created slice kubepods-burstable-pod8352003d_04f6_4883_995f_fca74baf50b9.slice - libcontainer container kubepods-burstable-pod8352003d_04f6_4883_995f_fca74baf50b9.slice. May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827513 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-run\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827603 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-kernel\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827624 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dadd715a-295d-48fd-aa34-c999d0a92a95-kube-proxy\") pod \"kube-proxy-4n42z\" (UID: \"dadd715a-295d-48fd-aa34-c999d0a92a95\") " pod="kube-system/kube-proxy-4n42z" May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827640 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-cgroup\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827657 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-hubble-tls\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828056 kubelet[2677]: I0510 00:06:30.827684 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-etc-cni-netd\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828330 kubelet[2677]: I0510 00:06:30.827699 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8352003d-04f6-4883-995f-fca74baf50b9-clustermesh-secrets\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 
00:06:30.828330 kubelet[2677]: I0510 00:06:30.827713 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dadd715a-295d-48fd-aa34-c999d0a92a95-xtables-lock\") pod \"kube-proxy-4n42z\" (UID: \"dadd715a-295d-48fd-aa34-c999d0a92a95\") " pod="kube-system/kube-proxy-4n42z" May 10 00:06:30.828330 kubelet[2677]: I0510 00:06:30.827728 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dadd715a-295d-48fd-aa34-c999d0a92a95-lib-modules\") pod \"kube-proxy-4n42z\" (UID: \"dadd715a-295d-48fd-aa34-c999d0a92a95\") " pod="kube-system/kube-proxy-4n42z" May 10 00:06:30.828330 kubelet[2677]: I0510 00:06:30.827751 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-hostproc\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828330 kubelet[2677]: I0510 00:06:30.827769 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wpv2\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-kube-api-access-9wpv2\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827790 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvsd\" (UniqueName: \"kubernetes.io/projected/dadd715a-295d-48fd-aa34-c999d0a92a95-kube-api-access-ctvsd\") pod \"kube-proxy-4n42z\" (UID: \"dadd715a-295d-48fd-aa34-c999d0a92a95\") " pod="kube-system/kube-proxy-4n42z" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827805 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-bpf-maps\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827832 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cni-path\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827863 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8352003d-04f6-4883-995f-fca74baf50b9-cilium-config-path\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827886 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-lib-modules\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828479 kubelet[2677]: I0510 00:06:30.827917 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-xtables-lock\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:30.828661 kubelet[2677]: I0510 00:06:30.827942 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-net\") pod \"cilium-t246w\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " pod="kube-system/cilium-t246w" May 10 00:06:31.041270 systemd[1]: Created slice kubepods-besteffort-podcd72303e_16e5_4560_94d2_001e935839f5.slice - libcontainer container kubepods-besteffort-podcd72303e_16e5_4560_94d2_001e935839f5.slice. May 10 00:06:31.116892 containerd[1477]: time="2025-05-10T00:06:31.116765885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n42z,Uid:dadd715a-295d-48fd-aa34-c999d0a92a95,Namespace:kube-system,Attempt:0,}" May 10 00:06:31.125912 containerd[1477]: time="2025-05-10T00:06:31.125576501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t246w,Uid:8352003d-04f6-4883-995f-fca74baf50b9,Namespace:kube-system,Attempt:0,}" May 10 00:06:31.149346 containerd[1477]: time="2025-05-10T00:06:31.149139237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:31.149937 containerd[1477]: time="2025-05-10T00:06:31.149610589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:31.149937 containerd[1477]: time="2025-05-10T00:06:31.149639669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.150072 containerd[1477]: time="2025-05-10T00:06:31.149903344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.168954 containerd[1477]: time="2025-05-10T00:06:31.167669974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:31.170413 containerd[1477]: time="2025-05-10T00:06:31.170103454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:31.170413 containerd[1477]: time="2025-05-10T00:06:31.170130214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.170413 containerd[1477]: time="2025-05-10T00:06:31.170226572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.172990 systemd[1]: Started cri-containerd-34683c3e53c8bc1bbbcfef155288dda0fafd9956ffc286276a6aff7c52eb388d.scope - libcontainer container 34683c3e53c8bc1bbbcfef155288dda0fafd9956ffc286276a6aff7c52eb388d. May 10 00:06:31.193773 systemd[1]: Started cri-containerd-7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd.scope - libcontainer container 7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd. 
May 10 00:06:31.208122 containerd[1477]: time="2025-05-10T00:06:31.208081434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n42z,Uid:dadd715a-295d-48fd-aa34-c999d0a92a95,Namespace:kube-system,Attempt:0,} returns sandbox id \"34683c3e53c8bc1bbbcfef155288dda0fafd9956ffc286276a6aff7c52eb388d\"" May 10 00:06:31.217116 containerd[1477]: time="2025-05-10T00:06:31.217075567Z" level=info msg="CreateContainer within sandbox \"34683c3e53c8bc1bbbcfef155288dda0fafd9956ffc286276a6aff7c52eb388d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:06:31.231181 kubelet[2677]: I0510 00:06:31.230930 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvlx\" (UniqueName: \"kubernetes.io/projected/cd72303e-16e5-4560-94d2-001e935839f5-kube-api-access-sqvlx\") pod \"cilium-operator-5d85765b45-j4k2x\" (UID: \"cd72303e-16e5-4560-94d2-001e935839f5\") " pod="kube-system/cilium-operator-5d85765b45-j4k2x" May 10 00:06:31.231181 kubelet[2677]: I0510 00:06:31.231072 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd72303e-16e5-4560-94d2-001e935839f5-cilium-config-path\") pod \"cilium-operator-5d85765b45-j4k2x\" (UID: \"cd72303e-16e5-4560-94d2-001e935839f5\") " pod="kube-system/cilium-operator-5d85765b45-j4k2x" May 10 00:06:31.250288 containerd[1477]: time="2025-05-10T00:06:31.249985550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t246w,Uid:8352003d-04f6-4883-995f-fca74baf50b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\"" May 10 00:06:31.253059 containerd[1477]: time="2025-05-10T00:06:31.252798784Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:06:31.254981 containerd[1477]: time="2025-05-10T00:06:31.254775272Z" level=info msg="CreateContainer within sandbox \"34683c3e53c8bc1bbbcfef155288dda0fafd9956ffc286276a6aff7c52eb388d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5e0fafa76a8987ed0dd869ff965cfff31a096a0dcf005fe0d4d02310edc7aca\"" May 10 00:06:31.256783 containerd[1477]: time="2025-05-10T00:06:31.256482524Z" level=info msg="StartContainer for \"f5e0fafa76a8987ed0dd869ff965cfff31a096a0dcf005fe0d4d02310edc7aca\"" May 10 00:06:31.281615 systemd[1]: Started cri-containerd-f5e0fafa76a8987ed0dd869ff965cfff31a096a0dcf005fe0d4d02310edc7aca.scope - libcontainer container f5e0fafa76a8987ed0dd869ff965cfff31a096a0dcf005fe0d4d02310edc7aca. May 10 00:06:31.311877 containerd[1477]: time="2025-05-10T00:06:31.311788661Z" level=info msg="StartContainer for \"f5e0fafa76a8987ed0dd869ff965cfff31a096a0dcf005fe0d4d02310edc7aca\" returns successfully" May 10 00:06:31.647030 containerd[1477]: time="2025-05-10T00:06:31.646375398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4k2x,Uid:cd72303e-16e5-4560-94d2-001e935839f5,Namespace:kube-system,Attempt:0,}" May 10 00:06:31.686605 containerd[1477]: time="2025-05-10T00:06:31.686320746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:31.686605 containerd[1477]: time="2025-05-10T00:06:31.686465143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:31.686605 containerd[1477]: time="2025-05-10T00:06:31.686493223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.687159 containerd[1477]: time="2025-05-10T00:06:31.686959375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:31.709043 systemd[1]: Started cri-containerd-03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1.scope - libcontainer container 03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1. May 10 00:06:31.757064 containerd[1477]: time="2025-05-10T00:06:31.757012471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4k2x,Uid:cd72303e-16e5-4560-94d2-001e935839f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\"" May 10 00:06:32.390981 kubelet[2677]: I0510 00:06:32.390897 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n42z" podStartSLOduration=2.390877277 podStartE2EDuration="2.390877277s" podCreationTimestamp="2025-05-10 00:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:31.692488005 +0000 UTC m=+8.224201942" watchObservedRunningTime="2025-05-10 00:06:32.390877277 +0000 UTC m=+8.922591214" May 10 00:06:35.752055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836063273.mount: Deactivated successfully. May 10 00:06:37.152359 containerd[1477]: time="2025-05-10T00:06:37.152303266Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.154162 containerd[1477]: time="2025-05-10T00:06:37.154053787Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 10 00:06:37.155317 containerd[1477]: time="2025-05-10T00:06:37.155254828Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.157587 containerd[1477]: time="2025-05-10T00:06:37.157545030Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.904702166s" May 10 00:06:37.157828 containerd[1477]: time="2025-05-10T00:06:37.157701950Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 10 00:06:37.160678 containerd[1477]: time="2025-05-10T00:06:37.160541633Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:06:37.162699 containerd[1477]: 
time="2025-05-10T00:06:37.162661434Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:06:37.179570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659703067.mount: Deactivated successfully. May 10 00:06:37.186713 containerd[1477]: time="2025-05-10T00:06:37.186669973Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\"" May 10 00:06:37.187895 containerd[1477]: time="2025-05-10T00:06:37.187863574Z" level=info msg="StartContainer for \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\"" May 10 00:06:37.220619 systemd[1]: Started cri-containerd-6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48.scope - libcontainer container 6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48. May 10 00:06:37.247321 containerd[1477]: time="2025-05-10T00:06:37.247236581Z" level=info msg="StartContainer for \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\" returns successfully" May 10 00:06:37.265719 systemd[1]: cri-containerd-6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48.scope: Deactivated successfully. May 10 00:06:37.441202 containerd[1477]: time="2025-05-10T00:06:37.440954694Z" level=info msg="shim disconnected" id=6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48 namespace=k8s.io May 10 00:06:37.441202 containerd[1477]: time="2025-05-10T00:06:37.441046414Z" level=warning msg="cleaning up after shim disconnected" id=6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48 namespace=k8s.io May 10 00:06:37.441202 containerd[1477]: time="2025-05-10T00:06:37.441060814Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:37.681911 containerd[1477]: time="2025-05-10T00:06:37.681676484Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:06:37.699019 containerd[1477]: time="2025-05-10T00:06:37.698616697Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\"" May 10 00:06:37.701841 containerd[1477]: time="2025-05-10T00:06:37.700774819Z" level=info msg="StartContainer for \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\"" May 10 00:06:37.736645 systemd[1]: Started cri-containerd-2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b.scope - libcontainer container 2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b. May 10 00:06:37.776689 containerd[1477]: time="2025-05-10T00:06:37.776643919Z" level=info msg="StartContainer for \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\" returns successfully" May 10 00:06:37.790967 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:06:37.791452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 00:06:37.791662 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
May 10 00:06:37.801096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 00:06:37.801349 systemd[1]: cri-containerd-2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b.scope: Deactivated successfully. May 10 00:06:37.823599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:06:37.825681 containerd[1477]: time="2025-05-10T00:06:37.825614518Z" level=info msg="shim disconnected" id=2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b namespace=k8s.io May 10 00:06:37.825810 containerd[1477]: time="2025-05-10T00:06:37.825682638Z" level=warning msg="cleaning up after shim disconnected" id=2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b namespace=k8s.io May 10 00:06:37.825810 containerd[1477]: time="2025-05-10T00:06:37.825692198Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:38.177450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48-rootfs.mount: Deactivated successfully. May 10 00:06:38.687516 containerd[1477]: time="2025-05-10T00:06:38.686724907Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:06:38.710296 containerd[1477]: time="2025-05-10T00:06:38.710234946Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\"" May 10 00:06:38.712527 containerd[1477]: time="2025-05-10T00:06:38.711683590Z" level=info msg="StartContainer for \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\"" May 10 00:06:38.750078 systemd[1]: Started cri-containerd-9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f.scope - libcontainer container 9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f. May 10 00:06:38.782713 containerd[1477]: time="2025-05-10T00:06:38.782658507Z" level=info msg="StartContainer for \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\" returns successfully" May 10 00:06:38.789881 systemd[1]: cri-containerd-9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f.scope: Deactivated successfully. May 10 00:06:38.815076 containerd[1477]: time="2025-05-10T00:06:38.815003895Z" level=info msg="shim disconnected" id=9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f namespace=k8s.io May 10 00:06:38.815076 containerd[1477]: time="2025-05-10T00:06:38.815063135Z" level=warning msg="cleaning up after shim disconnected" id=9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f namespace=k8s.io May 10 00:06:38.815076 containerd[1477]: time="2025-05-10T00:06:38.815072896Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:39.174279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f-rootfs.mount: Deactivated successfully. 
May 10 00:06:39.696768 containerd[1477]: time="2025-05-10T00:06:39.696630429Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:06:39.723673 containerd[1477]: time="2025-05-10T00:06:39.723612666Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\"" May 10 00:06:39.724533 containerd[1477]: time="2025-05-10T00:06:39.724387991Z" level=info msg="StartContainer for \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\"" May 10 00:06:39.767619 systemd[1]: Started cri-containerd-beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4.scope - libcontainer container beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4. May 10 00:06:39.797919 systemd[1]: cri-containerd-beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4.scope: Deactivated successfully. May 10 00:06:39.800699 containerd[1477]: time="2025-05-10T00:06:39.800646914Z" level=info msg="StartContainer for \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\" returns successfully" May 10 00:06:39.829460 containerd[1477]: time="2025-05-10T00:06:39.829362360Z" level=info msg="shim disconnected" id=beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4 namespace=k8s.io May 10 00:06:39.829460 containerd[1477]: time="2025-05-10T00:06:39.829457401Z" level=warning msg="cleaning up after shim disconnected" id=beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4 namespace=k8s.io May 10 00:06:39.829460 containerd[1477]: time="2025-05-10T00:06:39.829468641Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:06:40.175007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4-rootfs.mount: Deactivated successfully. May 10 00:06:40.701001 containerd[1477]: time="2025-05-10T00:06:40.700278813Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:06:40.732235 containerd[1477]: time="2025-05-10T00:06:40.732191955Z" level=info msg="CreateContainer within sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\"" May 10 00:06:40.733622 containerd[1477]: time="2025-05-10T00:06:40.733582846Z" level=info msg="StartContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\"" May 10 00:06:40.768780 systemd[1]: Started cri-containerd-a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f.scope - libcontainer container a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f. 
May 10 00:06:40.800243 containerd[1477]: time="2025-05-10T00:06:40.800182992Z" level=info msg="StartContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" returns successfully" May 10 00:06:40.940518 kubelet[2677]: I0510 00:06:40.940484 2677 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 00:06:40.980679 systemd[1]: Created slice kubepods-burstable-pod1231ea23_f954_4855_8b2a_d46c4cd17c8e.slice - libcontainer container kubepods-burstable-pod1231ea23_f954_4855_8b2a_d46c4cd17c8e.slice. May 10 00:06:40.990854 systemd[1]: Created slice kubepods-burstable-pod26d8008f_c7bc_44bc_83dc_d959867c276e.slice - libcontainer container kubepods-burstable-pod26d8008f_c7bc_44bc_83dc_d959867c276e.slice. May 10 00:06:41.006637 kubelet[2677]: I0510 00:06:41.006452 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7gl\" (UniqueName: \"kubernetes.io/projected/1231ea23-f954-4855-8b2a-d46c4cd17c8e-kube-api-access-dw7gl\") pod \"coredns-6f6b679f8f-dx42v\" (UID: \"1231ea23-f954-4855-8b2a-d46c4cd17c8e\") " pod="kube-system/coredns-6f6b679f8f-dx42v" May 10 00:06:41.006637 kubelet[2677]: I0510 00:06:41.006499 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgr6\" (UniqueName: \"kubernetes.io/projected/26d8008f-c7bc-44bc-83dc-d959867c276e-kube-api-access-xlgr6\") pod \"coredns-6f6b679f8f-zfxcr\" (UID: \"26d8008f-c7bc-44bc-83dc-d959867c276e\") " pod="kube-system/coredns-6f6b679f8f-zfxcr" May 10 00:06:41.006637 kubelet[2677]: I0510 00:06:41.006535 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1231ea23-f954-4855-8b2a-d46c4cd17c8e-config-volume\") pod \"coredns-6f6b679f8f-dx42v\" (UID: \"1231ea23-f954-4855-8b2a-d46c4cd17c8e\") " pod="kube-system/coredns-6f6b679f8f-dx42v" May 10 00:06:41.006637 kubelet[2677]: I0510 00:06:41.006571 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26d8008f-c7bc-44bc-83dc-d959867c276e-config-volume\") pod \"coredns-6f6b679f8f-zfxcr\" (UID: \"26d8008f-c7bc-44bc-83dc-d959867c276e\") " pod="kube-system/coredns-6f6b679f8f-zfxcr" May 10 00:06:41.289094 containerd[1477]: time="2025-05-10T00:06:41.288419863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dx42v,Uid:1231ea23-f954-4855-8b2a-d46c4cd17c8e,Namespace:kube-system,Attempt:0,}" May 10 00:06:41.294134 containerd[1477]: time="2025-05-10T00:06:41.293857960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zfxcr,Uid:26d8008f-c7bc-44bc-83dc-d959867c276e,Namespace:kube-system,Attempt:0,}" May 10 00:06:41.731716 kubelet[2677]: I0510 00:06:41.731660 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t246w" podStartSLOduration=5.82351897 podStartE2EDuration="11.731635845s" podCreationTimestamp="2025-05-10 00:06:30 +0000 UTC" firstStartedPulling="2025-05-10 00:06:31.252015557 +0000 UTC m=+7.783729494" lastFinishedPulling="2025-05-10 00:06:37.160132352 +0000 UTC m=+13.691846369" observedRunningTime="2025-05-10 00:06:41.730811236 +0000 UTC m=+18.262525173" watchObservedRunningTime="2025-05-10 00:06:41.731635845 +0000 UTC m=+18.263349782" May 10 00:06:41.793524 containerd[1477]: time="2025-05-10T00:06:41.792975770Z" level=info 
msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:41.793524 containerd[1477]: time="2025-05-10T00:06:41.793491215Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 10 00:06:41.794544 containerd[1477]: time="2025-05-10T00:06:41.794507066Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:41.795967 containerd[1477]: time="2025-05-10T00:06:41.795928241Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.635346528s" May 10 00:06:41.796095 containerd[1477]: time="2025-05-10T00:06:41.796076603Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 10 00:06:41.799155 containerd[1477]: time="2025-05-10T00:06:41.799118675Z" level=info msg="CreateContainer within sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:06:41.823434 containerd[1477]: time="2025-05-10T00:06:41.823280289Z" level=info msg="CreateContainer within sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\"" May 10 00:06:41.824190 containerd[1477]: time="2025-05-10T00:06:41.823986656Z" level=info msg="StartContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\"" May 10 00:06:41.850710 systemd[1]: Started cri-containerd-8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2.scope - libcontainer container 8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2. May 10 00:06:41.880293 containerd[1477]: time="2025-05-10T00:06:41.880202047Z" level=info msg="StartContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" returns successfully" May 10 00:06:42.178565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331881601.mount: Deactivated successfully. 
May 10 00:06:42.722990 kubelet[2677]: I0510 00:06:42.722906 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j4k2x" podStartSLOduration=1.686226859 podStartE2EDuration="11.722890773s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="2025-05-10 00:06:31.760295618 +0000 UTC m=+8.292009555" lastFinishedPulling="2025-05-10 00:06:41.796959532 +0000 UTC m=+18.328673469" observedRunningTime="2025-05-10 00:06:42.722590729 +0000 UTC m=+19.254304706" watchObservedRunningTime="2025-05-10 00:06:42.722890773 +0000 UTC m=+19.254604710" May 10 00:06:45.980073 systemd-networkd[1377]: cilium_host: Link UP May 10 00:06:45.980197 systemd-networkd[1377]: cilium_net: Link UP May 10 00:06:45.980201 systemd-networkd[1377]: cilium_net: Gained carrier May 10 00:06:45.980322 systemd-networkd[1377]: cilium_host: Gained carrier May 10 00:06:45.982701 systemd-networkd[1377]: cilium_host: Gained IPv6LL May 10 00:06:46.087947 systemd-networkd[1377]: cilium_vxlan: Link UP May 10 00:06:46.087954 systemd-networkd[1377]: cilium_vxlan: Gained carrier May 10 00:06:46.376442 kernel: NET: Registered PF_ALG protocol family May 10 00:06:46.644732 systemd-networkd[1377]: cilium_net: Gained IPv6LL May 10 00:06:47.100623 systemd-networkd[1377]: lxc_health: Link UP May 10 00:06:47.109644 systemd-networkd[1377]: lxc_health: Gained carrier May 10 00:06:47.285646 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL May 10 00:06:47.379680 systemd-networkd[1377]: lxc1b5bdb3b455b: Link UP May 10 00:06:47.389033 kernel: eth0: renamed from tmp64d16 May 10 00:06:47.393458 systemd-networkd[1377]: lxc1b5bdb3b455b: Gained carrier May 10 00:06:47.409813 systemd-networkd[1377]: lxc038d32ed9ace: Link UP May 10 00:06:47.414530 kernel: eth0: renamed from tmp3496a May 10 00:06:47.417306 systemd-networkd[1377]: lxc038d32ed9ace: Gained carrier May 10 00:06:48.820781 systemd-networkd[1377]: lxc_health: Gained IPv6LL May 10 00:06:49.332789 systemd-networkd[1377]: lxc1b5bdb3b455b: Gained IPv6LL May 10 00:06:49.397506 systemd-networkd[1377]: lxc038d32ed9ace: Gained IPv6LL May 10 00:06:51.439183 containerd[1477]: time="2025-05-10T00:06:51.438516844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:51.440043 containerd[1477]: time="2025-05-10T00:06:51.439661838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:51.440043 containerd[1477]: time="2025-05-10T00:06:51.439734360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:51.440379 containerd[1477]: time="2025-05-10T00:06:51.440248616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:51.453819 containerd[1477]: time="2025-05-10T00:06:51.453664259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:51.453819 containerd[1477]: time="2025-05-10T00:06:51.453734901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:51.453819 containerd[1477]: time="2025-05-10T00:06:51.453756662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:51.457888 containerd[1477]: time="2025-05-10T00:06:51.457760222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:51.476604 systemd[1]: Started cri-containerd-64d16da518505644cde15934977fac486718caeeca8c350736055ac7990ecc29.scope - libcontainer container 64d16da518505644cde15934977fac486718caeeca8c350736055ac7990ecc29. May 10 00:06:51.496607 systemd[1]: Started cri-containerd-3496aa70a87eebae6811e6f1b62e4250e62ebdcaf5b6df81d32178de883cdb3d.scope - libcontainer container 3496aa70a87eebae6811e6f1b62e4250e62ebdcaf5b6df81d32178de883cdb3d. May 10 00:06:51.540418 containerd[1477]: time="2025-05-10T00:06:51.540286983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zfxcr,Uid:26d8008f-c7bc-44bc-83dc-d959867c276e,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d16da518505644cde15934977fac486718caeeca8c350736055ac7990ecc29\"" May 10 00:06:51.548632 containerd[1477]: time="2025-05-10T00:06:51.547589962Z" level=info msg="CreateContainer within sandbox \"64d16da518505644cde15934977fac486718caeeca8c350736055ac7990ecc29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:06:51.576490 containerd[1477]: time="2025-05-10T00:06:51.576151701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dx42v,Uid:1231ea23-f954-4855-8b2a-d46c4cd17c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3496aa70a87eebae6811e6f1b62e4250e62ebdcaf5b6df81d32178de883cdb3d\"" May 10 00:06:51.585729 containerd[1477]: time="2025-05-10T00:06:51.585653906Z" level=info msg="CreateContainer within sandbox \"3496aa70a87eebae6811e6f1b62e4250e62ebdcaf5b6df81d32178de883cdb3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:06:51.587335 containerd[1477]: time="2025-05-10T00:06:51.587047388Z" level=info msg="CreateContainer within sandbox \"64d16da518505644cde15934977fac486718caeeca8c350736055ac7990ecc29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e7f0f02eed09ffb5828d7edc23368fa84ec86ad383f8becfa979c4e2396ee56\"" May 10 00:06:51.588874 containerd[1477]: time="2025-05-10T00:06:51.588709478Z" level=info msg="StartContainer for \"0e7f0f02eed09ffb5828d7edc23368fa84ec86ad383f8becfa979c4e2396ee56\"" May 10 00:06:51.613762 containerd[1477]: time="2025-05-10T00:06:51.613714990Z" level=info msg="CreateContainer within sandbox \"3496aa70a87eebae6811e6f1b62e4250e62ebdcaf5b6df81d32178de883cdb3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d97c6a8f9a127c10958d7aec2fb628a9841c45b9a3fd7530eae69242849a13ac\"" May 10 00:06:51.615005 containerd[1477]: time="2025-05-10T00:06:51.614753261Z" level=info msg="StartContainer for \"d97c6a8f9a127c10958d7aec2fb628a9841c45b9a3fd7530eae69242849a13ac\"" May 10 00:06:51.632772 systemd[1]: Started cri-containerd-0e7f0f02eed09ffb5828d7edc23368fa84ec86ad383f8becfa979c4e2396ee56.scope - libcontainer container 0e7f0f02eed09ffb5828d7edc23368fa84ec86ad383f8becfa979c4e2396ee56. May 10 00:06:51.657582 systemd[1]: Started cri-containerd-d97c6a8f9a127c10958d7aec2fb628a9841c45b9a3fd7530eae69242849a13ac.scope - libcontainer container d97c6a8f9a127c10958d7aec2fb628a9841c45b9a3fd7530eae69242849a13ac. 
May 10 00:06:51.691577 containerd[1477]: time="2025-05-10T00:06:51.689841158Z" level=info msg="StartContainer for \"0e7f0f02eed09ffb5828d7edc23368fa84ec86ad383f8becfa979c4e2396ee56\" returns successfully" May 10 00:06:51.705712 containerd[1477]: time="2025-05-10T00:06:51.705653034Z" level=info msg="StartContainer for \"d97c6a8f9a127c10958d7aec2fb628a9841c45b9a3fd7530eae69242849a13ac\" returns successfully" May 10 00:06:51.754664 kubelet[2677]: I0510 00:06:51.754472 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zfxcr" podStartSLOduration=20.754453941 podStartE2EDuration="20.754453941s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:51.754173372 +0000 UTC m=+28.285887309" watchObservedRunningTime="2025-05-10 00:06:51.754453941 +0000 UTC m=+28.286167878" May 10 00:06:51.777506 kubelet[2677]: I0510 00:06:51.776223 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dx42v" podStartSLOduration=20.776200554 podStartE2EDuration="20.776200554s" podCreationTimestamp="2025-05-10 00:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:51.774372419 +0000 UTC m=+28.306086356" watchObservedRunningTime="2025-05-10 00:06:51.776200554 +0000 UTC m=+28.307914491" May 10 00:07:14.909464 update_engine[1458]: I20250510 00:07:14.909042 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 10 00:07:14.909464 update_engine[1458]: I20250510 00:07:14.909128 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 10 00:07:14.910569 update_engine[1458]: I20250510 00:07:14.909585 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 10 00:07:14.910569 update_engine[1458]: I20250510 00:07:14.910326 1458 omaha_request_params.cc:62] Current group set to lts May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.910933 1458 update_attempter.cc:499] Already updated boot flags. Skipping. May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.910984 1458 update_attempter.cc:643] Scheduling an action processor start. 
May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.911019 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.911082 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.911208 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.911226 1458 omaha_request_action.cc:272] Request: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: May 10 00:07:14.911234 update_engine[1458]: I20250510 00:07:14.911238 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:14.912244 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 10 00:07:14.914276 update_engine[1458]: I20250510 00:07:14.914194 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:14.914858 update_engine[1458]: I20250510 00:07:14.914788 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:07:14.916040 update_engine[1458]: E20250510 00:07:14.915992 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:14.916119 update_engine[1458]: I20250510 00:07:14.916074 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 10 00:07:24.891270 update_engine[1458]: I20250510 00:07:24.891119 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:24.891846 update_engine[1458]: I20250510 00:07:24.891585 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:24.891955 update_engine[1458]: I20250510 00:07:24.891901 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:07:24.893138 update_engine[1458]: E20250510 00:07:24.892959 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:24.893138 update_engine[1458]: I20250510 00:07:24.893054 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 10 00:07:34.895870 update_engine[1458]: I20250510 00:07:34.895740 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:34.896784 update_engine[1458]: I20250510 00:07:34.896051 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:34.896784 update_engine[1458]: I20250510 00:07:34.896304 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 10 00:07:34.897241 update_engine[1458]: E20250510 00:07:34.897167 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:34.897342 update_engine[1458]: I20250510 00:07:34.897258 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 10 00:07:44.895753 update_engine[1458]: I20250510 00:07:44.895591 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:44.896170 update_engine[1458]: I20250510 00:07:44.895929 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:44.896170 update_engine[1458]: I20250510 00:07:44.896142 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 10 00:07:44.897821 update_engine[1458]: E20250510 00:07:44.897630 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:44.897821 update_engine[1458]: I20250510 00:07:44.897717 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 10 00:07:44.897821 update_engine[1458]: I20250510 00:07:44.897727 1458 omaha_request_action.cc:617] Omaha request response: May 10 00:07:44.897821 update_engine[1458]: E20250510 00:07:44.897820 1458 omaha_request_action.cc:636] Omaha request network transfer failed. May 10 00:07:44.897821 update_engine[1458]: I20250510 00:07:44.897840 1458 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897847 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897852 1458 update_attempter.cc:306] Processing Done. May 10 00:07:44.898199 update_engine[1458]: E20250510 00:07:44.897868 1458 update_attempter.cc:619] Update failed. May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897873 1458 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897878 1458 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897884 1458 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897955 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897979 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897984 1458 omaha_request_action.cc:272] Request: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.897989 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 10 00:07:44.898199 update_engine[1458]: I20250510 00:07:44.898135 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.898315 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 10 00:07:44.899209 update_engine[1458]: E20250510 00:07:44.899124 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899188 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899195 1458 omaha_request_action.cc:617] Omaha request response: May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899202 1458 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899209 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899215 1458 update_attempter.cc:306] Processing Done. May 10 00:07:44.899209 update_engine[1458]: I20250510 00:07:44.899222 1458 update_attempter.cc:310] Error event sent. May 10 00:07:44.899625 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 10 00:07:44.899625 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 10 00:07:44.900085 update_engine[1458]: I20250510 00:07:44.899232 1458 update_check_scheduler.cc:74] Next update check in 46m45s May 10 00:08:46.882876 systemd[1]: Started sshd@8-138.199.169.250:22-80.94.95.115:41196.service - OpenSSH per-connection server daemon (80.94.95.115:41196). May 10 00:08:48.073237 sshd[4079]: Invalid user Sujan from 80.94.95.115 port 41196 May 10 00:08:48.113574 sshd[4079]: Connection closed by invalid user Sujan 80.94.95.115 port 41196 [preauth] May 10 00:08:48.117028 systemd[1]: sshd@8-138.199.169.250:22-80.94.95.115:41196.service: Deactivated successfully. May 10 00:11:03.475870 systemd[1]: Started sshd@9-138.199.169.250:22-147.75.109.163:33064.service - OpenSSH per-connection server daemon (147.75.109.163:33064). May 10 00:11:04.483447 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 33064 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:04.485217 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:04.495043 systemd-logind[1457]: New session 8 of user core. May 10 00:11:04.500615 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 00:11:05.258764 sshd[4104]: pam_unix(sshd:session): session closed for user core May 10 00:11:05.263567 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. May 10 00:11:05.264505 systemd[1]: sshd@9-138.199.169.250:22-147.75.109.163:33064.service: Deactivated successfully. May 10 00:11:05.268233 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:11:05.269636 systemd-logind[1457]: Removed session 8. May 10 00:11:10.434725 systemd[1]: Started sshd@10-138.199.169.250:22-147.75.109.163:45426.service - OpenSSH per-connection server daemon (147.75.109.163:45426). May 10 00:11:11.439232 sshd[4118]: Accepted publickey for core from 147.75.109.163 port 45426 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:11.441538 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:11.448756 systemd-logind[1457]: New session 9 of user core. May 10 00:11:11.453626 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 10 00:11:12.231886 sshd[4118]: pam_unix(sshd:session): session closed for user core May 10 00:11:12.237749 systemd[1]: sshd@10-138.199.169.250:22-147.75.109.163:45426.service: Deactivated successfully. May 10 00:11:12.240795 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:11:12.241987 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. May 10 00:11:12.245909 systemd-logind[1457]: Removed session 9. May 10 00:11:17.418790 systemd[1]: Started sshd@11-138.199.169.250:22-147.75.109.163:55244.service - OpenSSH per-connection server daemon (147.75.109.163:55244). May 10 00:11:18.429080 sshd[4132]: Accepted publickey for core from 147.75.109.163 port 55244 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:18.431378 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:18.437241 systemd-logind[1457]: New session 10 of user core. May 10 00:11:18.443631 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:11:19.207984 sshd[4132]: pam_unix(sshd:session): session closed for user core May 10 00:11:19.212772 systemd[1]: sshd@11-138.199.169.250:22-147.75.109.163:55244.service: Deactivated successfully. May 10 00:11:19.216373 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:11:19.219131 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. May 10 00:11:19.220508 systemd-logind[1457]: Removed session 10. May 10 00:11:24.389445 systemd[1]: Started sshd@12-138.199.169.250:22-147.75.109.163:55248.service - OpenSSH per-connection server daemon (147.75.109.163:55248). May 10 00:11:25.400009 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 55248 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:25.403910 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:25.411778 systemd-logind[1457]: New session 11 of user core. May 10 00:11:25.420736 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:11:26.176288 sshd[4148]: pam_unix(sshd:session): session closed for user core May 10 00:11:26.182291 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. May 10 00:11:26.183211 systemd[1]: sshd@12-138.199.169.250:22-147.75.109.163:55248.service: Deactivated successfully. May 10 00:11:26.187623 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:11:26.188786 systemd-logind[1457]: Removed session 11. May 10 00:11:31.357934 systemd[1]: Started sshd@13-138.199.169.250:22-147.75.109.163:37314.service - OpenSSH per-connection server daemon (147.75.109.163:37314). May 10 00:11:32.356098 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 37314 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:32.358940 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:32.366776 systemd-logind[1457]: New session 12 of user core. May 10 00:11:32.374714 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:11:33.118457 sshd[4161]: pam_unix(sshd:session): session closed for user core May 10 00:11:33.124659 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. May 10 00:11:33.125601 systemd[1]: sshd@13-138.199.169.250:22-147.75.109.163:37314.service: Deactivated successfully. May 10 00:11:33.128580 systemd[1]: session-12.scope: Deactivated successfully. 
May 10 00:11:33.129986 systemd-logind[1457]: Removed session 12. May 10 00:11:33.306902 systemd[1]: Started sshd@14-138.199.169.250:22-147.75.109.163:37318.service - OpenSSH per-connection server daemon (147.75.109.163:37318). May 10 00:11:34.319428 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 37318 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:34.321595 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:34.326951 systemd-logind[1457]: New session 13 of user core. May 10 00:11:34.335603 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:11:35.139045 sshd[4177]: pam_unix(sshd:session): session closed for user core May 10 00:11:35.143733 systemd[1]: sshd@14-138.199.169.250:22-147.75.109.163:37318.service: Deactivated successfully. May 10 00:11:35.147157 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:11:35.149912 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. May 10 00:11:35.151494 systemd-logind[1457]: Removed session 13. May 10 00:11:35.318829 systemd[1]: Started sshd@15-138.199.169.250:22-147.75.109.163:37328.service - OpenSSH per-connection server daemon (147.75.109.163:37328). May 10 00:11:36.349199 sshd[4188]: Accepted publickey for core from 147.75.109.163 port 37328 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:36.351651 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:36.357363 systemd-logind[1457]: New session 14 of user core. May 10 00:11:36.365735 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 00:11:37.131542 sshd[4188]: pam_unix(sshd:session): session closed for user core May 10 00:11:37.136106 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. May 10 00:11:37.136375 systemd[1]: sshd@15-138.199.169.250:22-147.75.109.163:37328.service: Deactivated successfully. May 10 00:11:37.139535 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:11:37.142034 systemd-logind[1457]: Removed session 14. May 10 00:11:42.311832 systemd[1]: Started sshd@16-138.199.169.250:22-147.75.109.163:45798.service - OpenSSH per-connection server daemon (147.75.109.163:45798). May 10 00:11:43.332740 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 45798 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:43.336578 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:43.341645 systemd-logind[1457]: New session 15 of user core. May 10 00:11:43.351733 systemd[1]: Started session-15.scope - Session 15 of User core. May 10 00:11:44.113696 sshd[4201]: pam_unix(sshd:session): session closed for user core May 10 00:11:44.119220 systemd[1]: sshd@16-138.199.169.250:22-147.75.109.163:45798.service: Deactivated successfully. May 10 00:11:44.123238 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:11:44.124823 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. May 10 00:11:44.126057 systemd-logind[1457]: Removed session 15. May 10 00:11:44.299965 systemd[1]: Started sshd@17-138.199.169.250:22-147.75.109.163:45808.service - OpenSSH per-connection server daemon (147.75.109.163:45808). 
May 10 00:11:45.310080 sshd[4214]: Accepted publickey for core from 147.75.109.163 port 45808 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:45.312041 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:45.318014 systemd-logind[1457]: New session 16 of user core. May 10 00:11:45.325695 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 00:11:46.135485 sshd[4214]: pam_unix(sshd:session): session closed for user core May 10 00:11:46.142248 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. May 10 00:11:46.143081 systemd[1]: sshd@17-138.199.169.250:22-147.75.109.163:45808.service: Deactivated successfully. May 10 00:11:46.145361 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:11:46.146824 systemd-logind[1457]: Removed session 16. May 10 00:11:46.320314 systemd[1]: Started sshd@18-138.199.169.250:22-147.75.109.163:45810.service - OpenSSH per-connection server daemon (147.75.109.163:45810). May 10 00:11:47.334381 sshd[4224]: Accepted publickey for core from 147.75.109.163 port 45810 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:47.337195 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:47.344837 systemd-logind[1457]: New session 17 of user core. May 10 00:11:47.352670 systemd[1]: Started session-17.scope - Session 17 of User core. May 10 00:11:49.646900 sshd[4224]: pam_unix(sshd:session): session closed for user core May 10 00:11:49.651994 systemd[1]: sshd@18-138.199.169.250:22-147.75.109.163:45810.service: Deactivated successfully. May 10 00:11:49.655770 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:11:49.656922 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. May 10 00:11:49.658526 systemd-logind[1457]: Removed session 17. May 10 00:11:49.835881 systemd[1]: Started sshd@19-138.199.169.250:22-147.75.109.163:34756.service - OpenSSH per-connection server daemon (147.75.109.163:34756). May 10 00:11:50.843618 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 34756 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:50.845705 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:50.850653 systemd-logind[1457]: New session 18 of user core. May 10 00:11:50.857685 systemd[1]: Started session-18.scope - Session 18 of User core. May 10 00:11:51.748735 sshd[4244]: pam_unix(sshd:session): session closed for user core May 10 00:11:51.752708 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. May 10 00:11:51.753319 systemd[1]: sshd@19-138.199.169.250:22-147.75.109.163:34756.service: Deactivated successfully. May 10 00:11:51.755859 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:11:51.759562 systemd-logind[1457]: Removed session 18. May 10 00:11:51.930943 systemd[1]: Started sshd@20-138.199.169.250:22-147.75.109.163:34758.service - OpenSSH per-connection server daemon (147.75.109.163:34758). May 10 00:11:52.941922 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 34758 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:52.944081 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:52.949960 systemd-logind[1457]: New session 19 of user core. May 10 00:11:52.954632 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 10 00:11:53.711521 sshd[4254]: pam_unix(sshd:session): session closed for user core May 10 00:11:53.715562 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. May 10 00:11:53.716168 systemd[1]: sshd@20-138.199.169.250:22-147.75.109.163:34758.service: Deactivated successfully. May 10 00:11:53.719203 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:11:53.720547 systemd-logind[1457]: Removed session 19. May 10 00:11:58.888774 systemd[1]: Started sshd@21-138.199.169.250:22-147.75.109.163:57894.service - OpenSSH per-connection server daemon (147.75.109.163:57894). May 10 00:11:59.881957 sshd[4272]: Accepted publickey for core from 147.75.109.163 port 57894 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:11:59.883830 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:11:59.890085 systemd-logind[1457]: New session 20 of user core. May 10 00:11:59.895942 systemd[1]: Started session-20.scope - Session 20 of User core. May 10 00:12:00.642769 sshd[4272]: pam_unix(sshd:session): session closed for user core May 10 00:12:00.646953 systemd[1]: sshd@21-138.199.169.250:22-147.75.109.163:57894.service: Deactivated successfully. May 10 00:12:00.649845 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:12:00.653089 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. May 10 00:12:00.654579 systemd-logind[1457]: Removed session 20. May 10 00:12:05.831905 systemd[1]: Started sshd@22-138.199.169.250:22-147.75.109.163:57908.service - OpenSSH per-connection server daemon (147.75.109.163:57908). May 10 00:12:06.844440 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 57908 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:12:06.846729 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:12:06.852154 systemd-logind[1457]: New session 21 of user core. May 10 00:12:06.858766 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 00:12:07.622925 sshd[4287]: pam_unix(sshd:session): session closed for user core May 10 00:12:07.628004 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. May 10 00:12:07.628599 systemd[1]: sshd@22-138.199.169.250:22-147.75.109.163:57908.service: Deactivated successfully. May 10 00:12:07.630918 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:12:07.633179 systemd-logind[1457]: Removed session 21. May 10 00:12:07.793568 systemd[1]: Started sshd@23-138.199.169.250:22-147.75.109.163:38334.service - OpenSSH per-connection server daemon (147.75.109.163:38334). May 10 00:12:08.803536 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 38334 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:12:08.807014 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:12:08.813765 systemd-logind[1457]: New session 22 of user core. May 10 00:12:08.816968 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 10 00:12:10.728087 containerd[1477]: time="2025-05-10T00:12:10.727510110Z" level=info msg="StopContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" with timeout 30 (s)" May 10 00:12:10.728087 containerd[1477]: time="2025-05-10T00:12:10.727898853Z" level=info msg="Stop container \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" with signal terminated" May 10 00:12:10.742742 containerd[1477]: time="2025-05-10T00:12:10.742242655Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:12:10.744949 systemd[1]: cri-containerd-8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2.scope: Deactivated successfully. May 10 00:12:10.753531 containerd[1477]: time="2025-05-10T00:12:10.753306145Z" level=info msg="StopContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" with timeout 2 (s)" May 10 00:12:10.754174 containerd[1477]: time="2025-05-10T00:12:10.754019867Z" level=info msg="Stop container \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" with signal terminated" May 10 00:12:10.767500 systemd-networkd[1377]: lxc_health: Link DOWN May 10 00:12:10.767507 systemd-networkd[1377]: lxc_health: Lost carrier May 10 00:12:10.790949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2-rootfs.mount: Deactivated successfully. May 10 00:12:10.792816 systemd[1]: cri-containerd-a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f.scope: Deactivated successfully. May 10 00:12:10.793055 systemd[1]: cri-containerd-a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f.scope: Consumed 7.950s CPU time. May 10 00:12:10.805071 containerd[1477]: time="2025-05-10T00:12:10.804845091Z" level=info msg="shim disconnected" id=8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2 namespace=k8s.io May 10 00:12:10.805071 containerd[1477]: time="2025-05-10T00:12:10.804899934Z" level=warning msg="cleaning up after shim disconnected" id=8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2 namespace=k8s.io May 10 00:12:10.805071 containerd[1477]: time="2025-05-10T00:12:10.804910014Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:10.820130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f-rootfs.mount: Deactivated successfully. 
May 10 00:12:10.825116 containerd[1477]: time="2025-05-10T00:12:10.824922309Z" level=info msg="shim disconnected" id=a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f namespace=k8s.io May 10 00:12:10.825116 containerd[1477]: time="2025-05-10T00:12:10.824987873Z" level=warning msg="cleaning up after shim disconnected" id=a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f namespace=k8s.io May 10 00:12:10.825116 containerd[1477]: time="2025-05-10T00:12:10.824995874Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:10.833748 containerd[1477]: time="2025-05-10T00:12:10.833696224Z" level=info msg="StopContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" returns successfully" May 10 00:12:10.834353 containerd[1477]: time="2025-05-10T00:12:10.834324781Z" level=info msg="StopPodSandbox for \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\"" May 10 00:12:10.834638 containerd[1477]: time="2025-05-10T00:12:10.834498591Z" level=info msg="Container to stop \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.837069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1-shm.mount: Deactivated successfully. May 10 00:12:10.844673 systemd[1]: cri-containerd-03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1.scope: Deactivated successfully. May 10 00:12:10.852597 containerd[1477]: time="2025-05-10T00:12:10.852513769Z" level=info msg="StopContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" returns successfully" May 10 00:12:10.855256 containerd[1477]: time="2025-05-10T00:12:10.855217568Z" level=info msg="StopPodSandbox for \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\"" May 10 00:12:10.855353 containerd[1477]: time="2025-05-10T00:12:10.855275771Z" level=info msg="Container to stop \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.855353 containerd[1477]: time="2025-05-10T00:12:10.855289492Z" level=info msg="Container to stop \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.855353 containerd[1477]: time="2025-05-10T00:12:10.855299053Z" level=info msg="Container to stop \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.855353 containerd[1477]: time="2025-05-10T00:12:10.855308373Z" level=info msg="Container to stop \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.855353 containerd[1477]: time="2025-05-10T00:12:10.855317454Z" level=info msg="Container to stop \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:12:10.858777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd-shm.mount: Deactivated successfully. May 10 00:12:10.868597 systemd[1]: cri-containerd-7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd.scope: Deactivated successfully. 
May 10 00:12:10.898976 containerd[1477]: time="2025-05-10T00:12:10.898916773Z" level=info msg="shim disconnected" id=03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1 namespace=k8s.io May 10 00:12:10.898976 containerd[1477]: time="2025-05-10T00:12:10.898975057Z" level=warning msg="cleaning up after shim disconnected" id=03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1 namespace=k8s.io May 10 00:12:10.898976 containerd[1477]: time="2025-05-10T00:12:10.898984977Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:10.901291 containerd[1477]: time="2025-05-10T00:12:10.898783045Z" level=info msg="shim disconnected" id=7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd namespace=k8s.io May 10 00:12:10.901291 containerd[1477]: time="2025-05-10T00:12:10.899792465Z" level=warning msg="cleaning up after shim disconnected" id=7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd namespace=k8s.io May 10 00:12:10.901291 containerd[1477]: time="2025-05-10T00:12:10.899804105Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:10.914174 containerd[1477]: time="2025-05-10T00:12:10.914132267Z" level=info msg="TearDown network for sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" successfully" May 10 00:12:10.914347 containerd[1477]: time="2025-05-10T00:12:10.914330878Z" level=info msg="StopPodSandbox for \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" returns successfully" May 10 00:12:10.915643 containerd[1477]: time="2025-05-10T00:12:10.915596873Z" level=info msg="TearDown network for sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" successfully" May 10 00:12:10.915643 containerd[1477]: time="2025-05-10T00:12:10.915630795Z" level=info msg="StopPodSandbox for \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" returns successfully" May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055332 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-net\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055411 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-hostproc\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055439 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-xtables-lock\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055464 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-etc-cni-netd\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055485 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-lib-modules\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.055526 kubelet[2677]: I0510 00:12:11.055523 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqvlx\" (UniqueName: \"kubernetes.io/projected/cd72303e-16e5-4560-94d2-001e935839f5-kube-api-access-sqvlx\") pod \"cd72303e-16e5-4560-94d2-001e935839f5\" (UID: \"cd72303e-16e5-4560-94d2-001e935839f5\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055551 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cni-path\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055571 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-bpf-maps\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055596 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8352003d-04f6-4883-995f-fca74baf50b9-clustermesh-secrets\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055621 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wpv2\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-kube-api-access-9wpv2\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055649 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd72303e-16e5-4560-94d2-001e935839f5-cilium-config-path\") pod \"cd72303e-16e5-4560-94d2-001e935839f5\" (UID: \"cd72303e-16e5-4560-94d2-001e935839f5\") " May 10 00:12:11.056436 kubelet[2677]: I0510 00:12:11.055672 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-cgroup\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.057074 kubelet[2677]: I0510 00:12:11.055696 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-hubble-tls\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.057074 kubelet[2677]: I0510 00:12:11.055747 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8352003d-04f6-4883-995f-fca74baf50b9-cilium-config-path\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.057074 kubelet[2677]: I0510 00:12:11.055769 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-run\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.057074 kubelet[2677]: I0510 00:12:11.055791 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-kernel\") pod \"8352003d-04f6-4883-995f-fca74baf50b9\" (UID: \"8352003d-04f6-4883-995f-fca74baf50b9\") " May 10 00:12:11.057074 kubelet[2677]: I0510 00:12:11.055875 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.057355 kubelet[2677]: I0510 00:12:11.055926 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.057355 kubelet[2677]: I0510 00:12:11.055948 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.057355 kubelet[2677]: I0510 00:12:11.055966 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.057355 kubelet[2677]: I0510 00:12:11.055986 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.058610 kubelet[2677]: I0510 00:12:11.057653 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.060415 kubelet[2677]: I0510 00:12:11.060284 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.062035 kubelet[2677]: I0510 00:12:11.061914 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd72303e-16e5-4560-94d2-001e935839f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd72303e-16e5-4560-94d2-001e935839f5" (UID: "cd72303e-16e5-4560-94d2-001e935839f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:12:11.062035 kubelet[2677]: I0510 00:12:11.061987 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.062035 kubelet[2677]: I0510 00:12:11.062004 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.063906 kubelet[2677]: I0510 00:12:11.063821 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd72303e-16e5-4560-94d2-001e935839f5-kube-api-access-sqvlx" (OuterVolumeSpecName: "kube-api-access-sqvlx") pod "cd72303e-16e5-4560-94d2-001e935839f5" (UID: "cd72303e-16e5-4560-94d2-001e935839f5"). InnerVolumeSpecName "kube-api-access-sqvlx". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:12:11.063906 kubelet[2677]: I0510 00:12:11.063879 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:12:11.065644 kubelet[2677]: I0510 00:12:11.065578 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-kube-api-access-9wpv2" (OuterVolumeSpecName: "kube-api-access-9wpv2") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "kube-api-access-9wpv2". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:12:11.069008 kubelet[2677]: I0510 00:12:11.068930 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8352003d-04f6-4883-995f-fca74baf50b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:12:11.069917 kubelet[2677]: I0510 00:12:11.069787 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8352003d-04f6-4883-995f-fca74baf50b9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:12:11.070164 kubelet[2677]: I0510 00:12:11.070124 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8352003d-04f6-4883-995f-fca74baf50b9" (UID: "8352003d-04f6-4883-995f-fca74baf50b9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156441 2677 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-hubble-tls\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156475 2677 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-cgroup\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156485 2677 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156494 2677 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8352003d-04f6-4883-995f-fca74baf50b9-cilium-config-path\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156503 2677 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cilium-run\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156519 2677 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-hostproc\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156527 2677 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-host-proc-sys-net\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.156551 kubelet[2677]: I0510 00:12:11.156535 2677 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-etc-cni-netd\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156543 2677 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-lib-modules\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156553 2677 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-xtables-lock\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156567 2677 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-cni-path\") on node 
\"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156575 2677 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sqvlx\" (UniqueName: \"kubernetes.io/projected/cd72303e-16e5-4560-94d2-001e935839f5-kube-api-access-sqvlx\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156583 2677 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8352003d-04f6-4883-995f-fca74baf50b9-clustermesh-secrets\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156591 2677 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8352003d-04f6-4883-995f-fca74baf50b9-bpf-maps\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156599 2677 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9wpv2\" (UniqueName: \"kubernetes.io/projected/8352003d-04f6-4883-995f-fca74baf50b9-kube-api-access-9wpv2\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.157115 kubelet[2677]: I0510 00:12:11.156611 2677 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd72303e-16e5-4560-94d2-001e935839f5-cilium-config-path\") on node \"ci-4081-3-3-n-025f904aa2\" DevicePath \"\"" May 10 00:12:11.565753 kubelet[2677]: I0510 00:12:11.565707 2677 scope.go:117] "RemoveContainer" containerID="a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f" May 10 00:12:11.568928 containerd[1477]: time="2025-05-10T00:12:11.568521320Z" level=info msg="RemoveContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\"" May 10 00:12:11.577809 systemd[1]: Removed slice kubepods-burstable-pod8352003d_04f6_4883_995f_fca74baf50b9.slice - libcontainer container kubepods-burstable-pod8352003d_04f6_4883_995f_fca74baf50b9.slice. May 10 00:12:11.577938 systemd[1]: kubepods-burstable-pod8352003d_04f6_4883_995f_fca74baf50b9.slice: Consumed 8.037s CPU time. May 10 00:12:11.578871 containerd[1477]: time="2025-05-10T00:12:11.578835086Z" level=info msg="RemoveContainer for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" returns successfully" May 10 00:12:11.579526 kubelet[2677]: I0510 00:12:11.579368 2677 scope.go:117] "RemoveContainer" containerID="beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4" May 10 00:12:11.581190 systemd[1]: Removed slice kubepods-besteffort-podcd72303e_16e5_4560_94d2_001e935839f5.slice - libcontainer container kubepods-besteffort-podcd72303e_16e5_4560_94d2_001e935839f5.slice. 
May 10 00:12:11.583948 containerd[1477]: time="2025-05-10T00:12:11.583899143Z" level=info msg="RemoveContainer for \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\"" May 10 00:12:11.589270 containerd[1477]: time="2025-05-10T00:12:11.589200815Z" level=info msg="RemoveContainer for \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\" returns successfully" May 10 00:12:11.589494 kubelet[2677]: I0510 00:12:11.589464 2677 scope.go:117] "RemoveContainer" containerID="9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f" May 10 00:12:11.592579 containerd[1477]: time="2025-05-10T00:12:11.592126907Z" level=info msg="RemoveContainer for \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\"" May 10 00:12:11.599356 containerd[1477]: time="2025-05-10T00:12:11.599282647Z" level=info msg="RemoveContainer for \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\" returns successfully" May 10 00:12:11.600563 kubelet[2677]: I0510 00:12:11.599753 2677 scope.go:117] "RemoveContainer" containerID="2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b" May 10 00:12:11.603194 containerd[1477]: time="2025-05-10T00:12:11.603161195Z" level=info msg="RemoveContainer for \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\"" May 10 00:12:11.606618 containerd[1477]: time="2025-05-10T00:12:11.606583677Z" level=info msg="RemoveContainer for \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\" returns successfully" May 10 00:12:11.606793 kubelet[2677]: I0510 00:12:11.606770 2677 scope.go:117] "RemoveContainer" containerID="6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48" May 10 00:12:11.608935 containerd[1477]: time="2025-05-10T00:12:11.608901493Z" level=info msg="RemoveContainer for \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\"" May 10 00:12:11.613723 containerd[1477]: time="2025-05-10T00:12:11.613646892Z" level=info msg="RemoveContainer for \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\" returns successfully" May 10 00:12:11.614123 kubelet[2677]: I0510 00:12:11.613862 2677 scope.go:117] "RemoveContainer" containerID="a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f" May 10 00:12:11.614838 containerd[1477]: time="2025-05-10T00:12:11.614791319Z" level=error msg="ContainerStatus for \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\": not found" May 10 00:12:11.615388 kubelet[2677]: E0510 00:12:11.615340 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\": not found" containerID="a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f" May 10 00:12:11.615485 kubelet[2677]: I0510 00:12:11.615385 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f"} err="failed to get container status \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a823fed2ca880b655edeb3017fbc58b7be1da9641a7b209675473220ed84e57f\": not found" May 10 00:12:11.615515 kubelet[2677]: I0510 00:12:11.615487 
2677 scope.go:117] "RemoveContainer" containerID="beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4" May 10 00:12:11.616129 containerd[1477]: time="2025-05-10T00:12:11.616079875Z" level=error msg="ContainerStatus for \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\": not found" May 10 00:12:11.616305 kubelet[2677]: E0510 00:12:11.616276 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\": not found" containerID="beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4" May 10 00:12:11.616343 kubelet[2677]: I0510 00:12:11.616311 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4"} err="failed to get container status \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"beb1dbfcf4e2ed589a0cf92300093b78a5e0bac172401783ec4b2aff5bfd97c4\": not found" May 10 00:12:11.616343 kubelet[2677]: I0510 00:12:11.616331 2677 scope.go:117] "RemoveContainer" containerID="9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f" May 10 00:12:11.616556 containerd[1477]: time="2025-05-10T00:12:11.616523981Z" level=error msg="ContainerStatus for \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\": not found" May 10 00:12:11.616686 kubelet[2677]: E0510 00:12:11.616652 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\": not found" containerID="9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f" May 10 00:12:11.616724 kubelet[2677]: I0510 00:12:11.616692 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f"} err="failed to get container status \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a7766c2de9e9d1a8fda97ecb81d64d4716c8346148db49e2f1e1011a781007f\": not found" May 10 00:12:11.616750 kubelet[2677]: I0510 00:12:11.616724 2677 scope.go:117] "RemoveContainer" containerID="2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b" May 10 00:12:11.616962 containerd[1477]: time="2025-05-10T00:12:11.616931565Z" level=error msg="ContainerStatus for \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\": not found" May 10 00:12:11.617065 kubelet[2677]: E0510 00:12:11.617045 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\": 
not found" containerID="2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b" May 10 00:12:11.617105 kubelet[2677]: I0510 00:12:11.617072 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b"} err="failed to get container status \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2805ff037e7132b059249bd9889548c1bad5f997902349182372c05ab02b799b\": not found" May 10 00:12:11.617105 kubelet[2677]: I0510 00:12:11.617088 2677 scope.go:117] "RemoveContainer" containerID="6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48" May 10 00:12:11.617432 containerd[1477]: time="2025-05-10T00:12:11.617349189Z" level=error msg="ContainerStatus for \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\": not found" May 10 00:12:11.617540 kubelet[2677]: E0510 00:12:11.617495 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\": not found" containerID="6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48" May 10 00:12:11.617540 kubelet[2677]: I0510 00:12:11.617515 2677 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48"} err="failed to get container status \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\": rpc error: code = NotFound desc = an error occurred when try to find container \"6156030d5bbc94c40fe227d486aaf46503c8f56bc1e699d3b76ea73547de9d48\": not found" May 10 00:12:11.617540 kubelet[2677]: I0510 00:12:11.617541 2677 scope.go:117] "RemoveContainer" containerID="8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2" May 10 00:12:11.619134 containerd[1477]: time="2025-05-10T00:12:11.618870559Z" level=info msg="RemoveContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\"" May 10 00:12:11.621882 containerd[1477]: time="2025-05-10T00:12:11.621793370Z" level=info msg="RemoveContainer for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" returns successfully" May 10 00:12:11.622107 kubelet[2677]: I0510 00:12:11.622036 2677 scope.go:117] "RemoveContainer" containerID="8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2" May 10 00:12:11.622367 containerd[1477]: time="2025-05-10T00:12:11.622230276Z" level=error msg="ContainerStatus for \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\": not found" May 10 00:12:11.622573 kubelet[2677]: E0510 00:12:11.622505 2677 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\": not found" containerID="8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2" May 10 00:12:11.622573 kubelet[2677]: I0510 00:12:11.622554 2677 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2"} err="failed to get container status \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b0e240c23434e91804441f650b7f938571c588dc424f3f9e3729a4b5c100cd2\": not found" May 10 00:12:11.722020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1-rootfs.mount: Deactivated successfully. May 10 00:12:11.722175 systemd[1]: var-lib-kubelet-pods-cd72303e\x2d16e5\x2d4560\x2d94d2\x2d001e935839f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqvlx.mount: Deactivated successfully. May 10 00:12:11.722298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd-rootfs.mount: Deactivated successfully. May 10 00:12:11.722397 systemd[1]: var-lib-kubelet-pods-8352003d\x2d04f6\x2d4883\x2d995f\x2dfca74baf50b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9wpv2.mount: Deactivated successfully. May 10 00:12:11.722510 systemd[1]: var-lib-kubelet-pods-8352003d\x2d04f6\x2d4883\x2d995f\x2dfca74baf50b9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:12:11.722607 systemd[1]: var-lib-kubelet-pods-8352003d\x2d04f6\x2d4883\x2d995f\x2dfca74baf50b9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:12:12.816112 sshd[4300]: pam_unix(sshd:session): session closed for user core May 10 00:12:12.822240 systemd[1]: sshd@23-138.199.169.250:22-147.75.109.163:38334.service: Deactivated successfully. May 10 00:12:12.825721 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:12:12.826800 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. May 10 00:12:12.827791 systemd-logind[1457]: Removed session 22. May 10 00:12:12.995802 systemd[1]: Started sshd@24-138.199.169.250:22-147.75.109.163:38342.service - OpenSSH per-connection server daemon (147.75.109.163:38342). May 10 00:12:13.596545 kubelet[2677]: I0510 00:12:13.596483 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8352003d-04f6-4883-995f-fca74baf50b9" path="/var/lib/kubelet/pods/8352003d-04f6-4883-995f-fca74baf50b9/volumes" May 10 00:12:13.597623 kubelet[2677]: I0510 00:12:13.597558 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd72303e-16e5-4560-94d2-001e935839f5" path="/var/lib/kubelet/pods/cd72303e-16e5-4560-94d2-001e935839f5/volumes" May 10 00:12:13.778586 kubelet[2677]: E0510 00:12:13.778499 2677 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:12:13.988673 sshd[4463]: Accepted publickey for core from 147.75.109.163 port 38342 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:12:13.990916 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:12:13.996277 systemd-logind[1457]: New session 23 of user core. May 10 00:12:14.004723 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542129 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd72303e-16e5-4560-94d2-001e935839f5" containerName="cilium-operator" May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542196 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="mount-cgroup" May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542205 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="apply-sysctl-overwrites" May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542211 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="mount-bpf-fs" May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542224 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="clean-cilium-state" May 10 00:12:15.542210 kubelet[2677]: E0510 00:12:15.542232 2677 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="cilium-agent" May 10 00:12:15.542210 kubelet[2677]: I0510 00:12:15.542258 2677 memory_manager.go:354] "RemoveStaleState removing state" podUID="8352003d-04f6-4883-995f-fca74baf50b9" containerName="cilium-agent" May 10 00:12:15.542210 kubelet[2677]: I0510 00:12:15.542265 2677 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd72303e-16e5-4560-94d2-001e935839f5" containerName="cilium-operator" May 10 00:12:15.554158 systemd[1]: Created slice kubepods-burstable-podee80e10c_60b5_49b5_8852_2520fd92a02b.slice - libcontainer container kubepods-burstable-podee80e10c_60b5_49b5_8852_2520fd92a02b.slice. 
May 10 00:12:15.684633 kubelet[2677]: I0510 00:12:15.683777 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-cilium-run\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.684633 kubelet[2677]: I0510 00:12:15.683860 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee80e10c-60b5-49b5-8852-2520fd92a02b-cilium-ipsec-secrets\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.684633 kubelet[2677]: I0510 00:12:15.683907 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-host-proc-sys-net\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.684633 kubelet[2677]: I0510 00:12:15.683949 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee80e10c-60b5-49b5-8852-2520fd92a02b-clustermesh-secrets\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.684633 kubelet[2677]: I0510 00:12:15.683990 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6j7p\" (UniqueName: \"kubernetes.io/projected/ee80e10c-60b5-49b5-8852-2520fd92a02b-kube-api-access-v6j7p\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684031 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-bpf-maps\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684070 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-etc-cni-netd\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684109 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee80e10c-60b5-49b5-8852-2520fd92a02b-cilium-config-path\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684144 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee80e10c-60b5-49b5-8852-2520fd92a02b-hubble-tls\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684182 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-hostproc\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685132 kubelet[2677]: I0510 00:12:15.684217 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-cni-path\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685621 kubelet[2677]: I0510 00:12:15.684264 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-lib-modules\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685621 kubelet[2677]: I0510 00:12:15.684304 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-host-proc-sys-kernel\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685621 kubelet[2677]: I0510 00:12:15.684347 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-xtables-lock\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.685621 kubelet[2677]: I0510 00:12:15.684387 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee80e10c-60b5-49b5-8852-2520fd92a02b-cilium-cgroup\") pod \"cilium-8rrw6\" (UID: \"ee80e10c-60b5-49b5-8852-2520fd92a02b\") " pod="kube-system/cilium-8rrw6" May 10 00:12:15.712765 sshd[4463]: pam_unix(sshd:session): session closed for user core May 10 00:12:15.717542 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. May 10 00:12:15.717652 systemd[1]: sshd@24-138.199.169.250:22-147.75.109.163:38342.service: Deactivated successfully. May 10 00:12:15.720844 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:12:15.723652 systemd-logind[1457]: Removed session 23. May 10 00:12:15.858419 containerd[1477]: time="2025-05-10T00:12:15.858358199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rrw6,Uid:ee80e10c-60b5-49b5-8852-2520fd92a02b,Namespace:kube-system,Attempt:0,}" May 10 00:12:15.882032 containerd[1477]: time="2025-05-10T00:12:15.881419720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:12:15.882032 containerd[1477]: time="2025-05-10T00:12:15.881501245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:12:15.882032 containerd[1477]: time="2025-05-10T00:12:15.881517166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:12:15.882032 containerd[1477]: time="2025-05-10T00:12:15.881617852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:12:15.893821 systemd[1]: Started sshd@25-138.199.169.250:22-147.75.109.163:38346.service - OpenSSH per-connection server daemon (147.75.109.163:38346). May 10 00:12:15.901911 systemd[1]: Started cri-containerd-78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4.scope - libcontainer container 78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4. May 10 00:12:15.929600 containerd[1477]: time="2025-05-10T00:12:15.929557641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rrw6,Uid:ee80e10c-60b5-49b5-8852-2520fd92a02b,Namespace:kube-system,Attempt:0,} returns sandbox id \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\"" May 10 00:12:15.934255 containerd[1477]: time="2025-05-10T00:12:15.934208035Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:12:15.951168 containerd[1477]: time="2025-05-10T00:12:15.951086311Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248\"" May 10 00:12:15.952433 containerd[1477]: time="2025-05-10T00:12:15.952231939Z" level=info msg="StartContainer for \"4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248\"" May 10 00:12:15.977614 systemd[1]: Started cri-containerd-4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248.scope - libcontainer container 4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248. May 10 00:12:16.008277 containerd[1477]: time="2025-05-10T00:12:16.007695332Z" level=info msg="StartContainer for \"4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248\" returns successfully" May 10 00:12:16.020863 systemd[1]: cri-containerd-4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248.scope: Deactivated successfully. 
May 10 00:12:16.074282 containerd[1477]: time="2025-05-10T00:12:16.074022970Z" level=info msg="shim disconnected" id=4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248 namespace=k8s.io May 10 00:12:16.074282 containerd[1477]: time="2025-05-10T00:12:16.074088894Z" level=warning msg="cleaning up after shim disconnected" id=4503bdd2aad1a7af19b637b3ab927db213920d9a5b2acffe175151c8bf570248 namespace=k8s.io May 10 00:12:16.074282 containerd[1477]: time="2025-05-10T00:12:16.074102614Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:16.594673 containerd[1477]: time="2025-05-10T00:12:16.594582237Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:12:16.611629 containerd[1477]: time="2025-05-10T00:12:16.611375868Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9\"" May 10 00:12:16.616397 containerd[1477]: time="2025-05-10T00:12:16.613611560Z" level=info msg="StartContainer for \"e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9\"" May 10 00:12:16.641614 systemd[1]: Started cri-containerd-e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9.scope - libcontainer container e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9. May 10 00:12:16.669684 containerd[1477]: time="2025-05-10T00:12:16.669636870Z" level=info msg="StartContainer for \"e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9\" returns successfully" May 10 00:12:16.679148 systemd[1]: cri-containerd-e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9.scope: Deactivated successfully. May 10 00:12:16.708328 containerd[1477]: time="2025-05-10T00:12:16.708153225Z" level=info msg="shim disconnected" id=e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9 namespace=k8s.io May 10 00:12:16.708328 containerd[1477]: time="2025-05-10T00:12:16.708235549Z" level=warning msg="cleaning up after shim disconnected" id=e96c60c234c485f49641d4c669ef93711779ff20d6361560d12b3b342ed0d2b9 namespace=k8s.io May 10 00:12:16.708328 containerd[1477]: time="2025-05-10T00:12:16.708252790Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:16.916503 sshd[4501]: Accepted publickey for core from 147.75.109.163 port 38346 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:12:16.919807 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:12:16.925020 systemd-logind[1457]: New session 24 of user core. May 10 00:12:16.936697 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 10 00:12:17.599719 containerd[1477]: time="2025-05-10T00:12:17.599584631Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:12:17.617656 sshd[4501]: pam_unix(sshd:session): session closed for user core May 10 00:12:17.619692 containerd[1477]: time="2025-05-10T00:12:17.619649337Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc\"" May 10 00:12:17.621627 containerd[1477]: time="2025-05-10T00:12:17.621517087Z" level=info msg="StartContainer for \"952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc\"" May 10 00:12:17.625841 systemd[1]: sshd@25-138.199.169.250:22-147.75.109.163:38346.service: Deactivated successfully. May 10 00:12:17.631992 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:12:17.634897 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. May 10 00:12:17.641594 systemd-logind[1457]: Removed session 24. May 10 00:12:17.659717 systemd[1]: Started cri-containerd-952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc.scope - libcontainer container 952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc. May 10 00:12:17.688341 containerd[1477]: time="2025-05-10T00:12:17.688289155Z" level=info msg="StartContainer for \"952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc\" returns successfully" May 10 00:12:17.693112 systemd[1]: cri-containerd-952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc.scope: Deactivated successfully. May 10 00:12:17.727220 containerd[1477]: time="2025-05-10T00:12:17.727161973Z" level=info msg="shim disconnected" id=952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc namespace=k8s.io May 10 00:12:17.727641 containerd[1477]: time="2025-05-10T00:12:17.727608640Z" level=warning msg="cleaning up after shim disconnected" id=952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc namespace=k8s.io May 10 00:12:17.727801 containerd[1477]: time="2025-05-10T00:12:17.727630521Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:17.795692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-952ab90286ed77eda972d71bcb13b907adfef39285a4a22fdd31c671963462bc-rootfs.mount: Deactivated successfully. May 10 00:12:17.803835 systemd[1]: Started sshd@26-138.199.169.250:22-147.75.109.163:51860.service - OpenSSH per-connection server daemon (147.75.109.163:51860). May 10 00:12:18.605091 containerd[1477]: time="2025-05-10T00:12:18.604914221Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:12:18.629770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205308788.mount: Deactivated successfully. 
May 10 00:12:18.632984 containerd[1477]: time="2025-05-10T00:12:18.632922838Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31\"" May 10 00:12:18.634283 containerd[1477]: time="2025-05-10T00:12:18.634247316Z" level=info msg="StartContainer for \"4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31\"" May 10 00:12:18.666818 systemd[1]: Started cri-containerd-4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31.scope - libcontainer container 4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31. May 10 00:12:18.696766 systemd[1]: cri-containerd-4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31.scope: Deactivated successfully. May 10 00:12:18.702242 containerd[1477]: time="2025-05-10T00:12:18.701982605Z" level=info msg="StartContainer for \"4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31\" returns successfully" May 10 00:12:18.733342 containerd[1477]: time="2025-05-10T00:12:18.733233454Z" level=info msg="shim disconnected" id=4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31 namespace=k8s.io May 10 00:12:18.733342 containerd[1477]: time="2025-05-10T00:12:18.733336940Z" level=warning msg="cleaning up after shim disconnected" id=4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31 namespace=k8s.io May 10 00:12:18.733669 containerd[1477]: time="2025-05-10T00:12:18.733356101Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:12:18.780568 kubelet[2677]: E0510 00:12:18.780504 2677 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:12:18.794103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4771c9d05bbe25072af31577088d4736c81d8c6d9d21b63c4f25fda768e7ef31-rootfs.mount: Deactivated successfully. May 10 00:12:18.814599 sshd[4710]: Accepted publickey for core from 147.75.109.163 port 51860 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew May 10 00:12:18.816768 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:12:18.821526 systemd-logind[1457]: New session 25 of user core. May 10 00:12:18.828655 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 10 00:12:19.498041 kubelet[2677]: I0510 00:12:19.497919 2677 setters.go:600] "Node became not ready" node="ci-4081-3-3-n-025f904aa2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:12:19Z","lastTransitionTime":"2025-05-10T00:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:12:19.608921 containerd[1477]: time="2025-05-10T00:12:19.608603449Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:12:19.627446 containerd[1477]: time="2025-05-10T00:12:19.626995338Z" level=info msg="CreateContainer within sandbox \"78099952f27b0794df6fac2cda864ca93bbf4d966b1cd0b3aa2115912c5c12a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2\"" May 10 00:12:19.629163 containerd[1477]: time="2025-05-10T00:12:19.628134325Z" level=info msg="StartContainer for \"c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2\"" May 10 00:12:19.664698 systemd[1]: Started cri-containerd-c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2.scope - libcontainer container c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2. May 10 00:12:19.699437 containerd[1477]: time="2025-05-10T00:12:19.698829873Z" level=info msg="StartContainer for \"c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2\" returns successfully" May 10 00:12:19.795169 systemd[1]: run-containerd-runc-k8s.io-c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2-runc.ropi3C.mount: Deactivated successfully. 
May 10 00:12:20.007528 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 10 00:12:20.634067 kubelet[2677]: I0510 00:12:20.633264 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8rrw6" podStartSLOduration=5.633244253 podStartE2EDuration="5.633244253s" podCreationTimestamp="2025-05-10 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:12:20.631083245 +0000 UTC m=+357.162797182" watchObservedRunningTime="2025-05-10 00:12:20.633244253 +0000 UTC m=+357.164958190" May 10 00:12:23.047710 systemd-networkd[1377]: lxc_health: Link UP May 10 00:12:23.066676 systemd-networkd[1377]: lxc_health: Gained carrier May 10 00:12:23.597447 containerd[1477]: time="2025-05-10T00:12:23.596715952Z" level=info msg="StopPodSandbox for \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\"" May 10 00:12:23.597447 containerd[1477]: time="2025-05-10T00:12:23.596813718Z" level=info msg="TearDown network for sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" successfully" May 10 00:12:23.597447 containerd[1477]: time="2025-05-10T00:12:23.596825159Z" level=info msg="StopPodSandbox for \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" returns successfully" May 10 00:12:23.600964 containerd[1477]: time="2025-05-10T00:12:23.600459135Z" level=info msg="RemovePodSandbox for \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\"" May 10 00:12:23.600964 containerd[1477]: time="2025-05-10T00:12:23.600517018Z" level=info msg="Forcibly stopping sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\"" May 10 00:12:23.600964 containerd[1477]: time="2025-05-10T00:12:23.600592862Z" level=info msg="TearDown network for sandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" successfully" May 10 00:12:23.611851 containerd[1477]: time="2025-05-10T00:12:23.611656080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:12:23.611851 containerd[1477]: time="2025-05-10T00:12:23.611726164Z" level=info msg="RemovePodSandbox \"7c73c15613f621806e9f7f61982eab7dd2391ff6f1e14aa90b0376c977ccd4dd\" returns successfully" May 10 00:12:23.612437 containerd[1477]: time="2025-05-10T00:12:23.612392804Z" level=info msg="StopPodSandbox for \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\"" May 10 00:12:23.612555 containerd[1477]: time="2025-05-10T00:12:23.612496010Z" level=info msg="TearDown network for sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" successfully" May 10 00:12:23.612555 containerd[1477]: time="2025-05-10T00:12:23.612518211Z" level=info msg="StopPodSandbox for \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" returns successfully" May 10 00:12:23.615440 containerd[1477]: time="2025-05-10T00:12:23.613075524Z" level=info msg="RemovePodSandbox for \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\"" May 10 00:12:23.615440 containerd[1477]: time="2025-05-10T00:12:23.613108046Z" level=info msg="Forcibly stopping sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\"" May 10 00:12:23.615916 containerd[1477]: time="2025-05-10T00:12:23.615775965Z" level=info msg="TearDown network for sandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" successfully" May 10 00:12:23.625431 containerd[1477]: time="2025-05-10T00:12:23.625268569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:12:23.625431 containerd[1477]: time="2025-05-10T00:12:23.625330333Z" level=info msg="RemovePodSandbox \"03f6d5857c1e11ce8200d3cc9c60d9d291903d97e8df5bb8165151b0c1c7ced1\" returns successfully" May 10 00:12:24.693629 systemd-networkd[1377]: lxc_health: Gained IPv6LL May 10 00:12:25.860811 systemd[1]: run-containerd-runc-k8s.io-c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2-runc.AsLacb.mount: Deactivated successfully. May 10 00:12:30.205051 systemd[1]: run-containerd-runc-k8s.io-c3aad12fd8e43cf6b6375cb4d0fd8a729de7372d5ab5f9d38c27afca81a1f0c2-runc.PwHrCx.mount: Deactivated successfully. May 10 00:12:30.435805 sshd[4710]: pam_unix(sshd:session): session closed for user core May 10 00:12:30.440481 systemd[1]: sshd@26-138.199.169.250:22-147.75.109.163:51860.service: Deactivated successfully. May 10 00:12:30.444732 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:12:30.447744 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. May 10 00:12:30.449287 systemd-logind[1457]: Removed session 25.