May 15 23:43:17.898887 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 23:43:17.898915 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 22:19:24 -00 2025
May 15 23:43:17.898926 kernel: KASLR enabled
May 15 23:43:17.898933 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 15 23:43:17.898939 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
May 15 23:43:17.898944 kernel: random: crng init done
May 15 23:43:17.898962 kernel: secureboot: Secure boot disabled
May 15 23:43:17.898970 kernel: ACPI: Early table checksum verification disabled
May 15 23:43:17.898977 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 15 23:43:17.898985 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 15 23:43:17.898991 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.898997 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899003 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899009 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899016 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899024 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899030 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899037 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899043 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:43:17.899049 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 15 23:43:17.899058 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 15 23:43:17.899066 kernel: NUMA: Failed to initialise from firmware
May 15 23:43:17.899073 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 15 23:43:17.899079 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
May 15 23:43:17.899085 kernel: Zone ranges:
May 15 23:43:17.899093 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 15 23:43:17.899099 kernel: DMA32 empty
May 15 23:43:17.899105 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 15 23:43:17.899111 kernel: Movable zone start for each node
May 15 23:43:17.899117 kernel: Early memory node ranges
May 15 23:43:17.899123 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 15 23:43:17.899129 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 15 23:43:17.899136 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 15 23:43:17.899142 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 15 23:43:17.899148 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 15 23:43:17.899154 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 15 23:43:17.899160 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 15 23:43:17.899168 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 15 23:43:17.899174 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 15 23:43:17.899180 kernel: psci: probing for conduit method from ACPI.
May 15 23:43:17.902004 kernel: psci: PSCIv1.1 detected in firmware.
May 15 23:43:17.902015 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 23:43:17.902022 kernel: psci: Trusted OS migration not required
May 15 23:43:17.902032 kernel: psci: SMC Calling Convention v1.1
May 15 23:43:17.902039 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 23:43:17.902045 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 23:43:17.902052 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 23:43:17.902059 kernel: pcpu-alloc: [0] 0 [0] 1
May 15 23:43:17.902066 kernel: Detected PIPT I-cache on CPU0
May 15 23:43:17.902072 kernel: CPU features: detected: GIC system register CPU interface
May 15 23:43:17.902079 kernel: CPU features: detected: Hardware dirty bit management
May 15 23:43:17.902086 kernel: CPU features: detected: Spectre-v4
May 15 23:43:17.902092 kernel: CPU features: detected: Spectre-BHB
May 15 23:43:17.902101 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 23:43:17.902108 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 23:43:17.902114 kernel: CPU features: detected: ARM erratum 1418040
May 15 23:43:17.902121 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 23:43:17.902127 kernel: alternatives: applying boot alternatives
May 15 23:43:17.902135 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a
May 15 23:43:17.902143 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:43:17.902149 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:43:17.902156 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:43:17.902163 kernel: Fallback order for Node 0: 0
May 15 23:43:17.902169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 15 23:43:17.902177 kernel: Policy zone: Normal
May 15 23:43:17.902199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:43:17.902206 kernel: software IO TLB: area num 2.
May 15 23:43:17.902213 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 15 23:43:17.902220 kernel: Memory: 3882616K/4096000K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 213384K reserved, 0K cma-reserved)
May 15 23:43:17.902227 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 23:43:17.902234 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:43:17.902242 kernel: rcu: RCU event tracing is enabled.
May 15 23:43:17.902248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 23:43:17.902255 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:43:17.902262 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:43:17.902269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:43:17.902278 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 23:43:17.902284 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 23:43:17.902291 kernel: GICv3: 256 SPIs implemented
May 15 23:43:17.902298 kernel: GICv3: 0 Extended SPIs implemented
May 15 23:43:17.902304 kernel: Root IRQ handler: gic_handle_irq
May 15 23:43:17.902311 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 23:43:17.902318 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 23:43:17.902324 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 23:43:17.902331 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 23:43:17.902338 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 15 23:43:17.902345 kernel: GICv3: using LPI property table @0x00000001000e0000
May 15 23:43:17.902353 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 15 23:43:17.902360 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:43:17.902367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:43:17.902374 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 23:43:17.902381 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 23:43:17.902388 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 23:43:17.902395 kernel: Console: colour dummy device 80x25
May 15 23:43:17.902402 kernel: ACPI: Core revision 20230628
May 15 23:43:17.902409 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 23:43:17.902416 kernel: pid_max: default: 32768 minimum: 301
May 15 23:43:17.902424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:43:17.902431 kernel: landlock: Up and running.
May 15 23:43:17.902438 kernel: SELinux: Initializing.
May 15 23:43:17.902445 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:43:17.902452 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:43:17.902459 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 15 23:43:17.902466 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 23:43:17.902473 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 23:43:17.902480 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:43:17.902488 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:43:17.902495 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 23:43:17.902502 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 23:43:17.902509 kernel: Remapping and enabling EFI services.
May 15 23:43:17.902516 kernel: smp: Bringing up secondary CPUs ...
May 15 23:43:17.902523 kernel: Detected PIPT I-cache on CPU1
May 15 23:43:17.902531 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 23:43:17.902538 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 15 23:43:17.902544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:43:17.902551 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 23:43:17.902560 kernel: smp: Brought up 1 node, 2 CPUs
May 15 23:43:17.902567 kernel: SMP: Total of 2 processors activated.
May 15 23:43:17.902580 kernel: CPU features: detected: 32-bit EL0 Support
May 15 23:43:17.902588 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 23:43:17.902596 kernel: CPU features: detected: Common not Private translations
May 15 23:43:17.902603 kernel: CPU features: detected: CRC32 instructions
May 15 23:43:17.902610 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 23:43:17.902618 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 23:43:17.902625 kernel: CPU features: detected: LSE atomic instructions
May 15 23:43:17.902632 kernel: CPU features: detected: Privileged Access Never
May 15 23:43:17.902641 kernel: CPU features: detected: RAS Extension Support
May 15 23:43:17.902648 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 23:43:17.902655 kernel: CPU: All CPU(s) started at EL1
May 15 23:43:17.902663 kernel: alternatives: applying system-wide alternatives
May 15 23:43:17.902670 kernel: devtmpfs: initialized
May 15 23:43:17.902677 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:43:17.902686 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 23:43:17.902693 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:43:17.902701 kernel: SMBIOS 3.0.0 present.
May 15 23:43:17.902708 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 15 23:43:17.902715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:43:17.902723 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 23:43:17.902730 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 23:43:17.902737 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 23:43:17.902745 kernel: audit: initializing netlink subsys (disabled)
May 15 23:43:17.902753 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
May 15 23:43:17.902761 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:43:17.902768 kernel: cpuidle: using governor menu
May 15 23:43:17.902775 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 23:43:17.902782 kernel: ASID allocator initialised with 32768 entries
May 15 23:43:17.902790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:43:17.902797 kernel: Serial: AMBA PL011 UART driver
May 15 23:43:17.902804 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 23:43:17.902812 kernel: Modules: 0 pages in range for non-PLT usage
May 15 23:43:17.902820 kernel: Modules: 508944 pages in range for PLT usage
May 15 23:43:17.902828 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:43:17.902835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:43:17.902842 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 23:43:17.902849 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 23:43:17.902857 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:43:17.902864 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:43:17.902871 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 23:43:17.902878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 23:43:17.902887 kernel: ACPI: Added _OSI(Module Device)
May 15 23:43:17.902894 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:43:17.902901 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:43:17.902909 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:43:17.902916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:43:17.902924 kernel: ACPI: Interpreter enabled
May 15 23:43:17.902931 kernel: ACPI: Using GIC for interrupt routing
May 15 23:43:17.902938 kernel: ACPI: MCFG table detected, 1 entries
May 15 23:43:17.902945 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 23:43:17.902963 kernel: printk: console [ttyAMA0] enabled
May 15 23:43:17.902971 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:43:17.903139 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:43:17.903689 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 23:43:17.903768 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 23:43:17.903832 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 23:43:17.904390 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 23:43:17.904413 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 23:43:17.904421 kernel: PCI host bridge to bus 0000:00
May 15 23:43:17.904513 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 23:43:17.904574 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 23:43:17.904647 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 23:43:17.904708 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:43:17.904789 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 23:43:17.904868 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 15 23:43:17.904935 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 15 23:43:17.905059 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 15 23:43:17.905138 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.905293 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 15 23:43:17.905394 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.905469 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 15 23:43:17.905542 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.905610 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 15 23:43:17.905683 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.905749 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 15 23:43:17.905820 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.905895 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 15 23:43:17.905990 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.906060 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 15 23:43:17.906134 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.906268 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 15 23:43:17.906349 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.906413 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 15 23:43:17.906487 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 15 23:43:17.906563 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 15 23:43:17.906642 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 15 23:43:17.906708 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 15 23:43:17.906783 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 15 23:43:17.906850 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 15 23:43:17.906920 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:43:17.907004 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 15 23:43:17.907079 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 15 23:43:17.907146 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 15 23:43:17.907254 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 15 23:43:17.907325 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 15 23:43:17.907392 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 15 23:43:17.907471 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 15 23:43:17.907622 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 15 23:43:17.907706 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 15 23:43:17.907787 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 15 23:43:17.907858 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 15 23:43:17.907936 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 15 23:43:17.908074 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 15 23:43:17.908143 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 15 23:43:17.912323 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 15 23:43:17.912432 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 15 23:43:17.912519 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 15 23:43:17.912603 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 15 23:43:17.912692 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 15 23:43:17.912757 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 15 23:43:17.912822 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 15 23:43:17.913004 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 15 23:43:17.913080 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 15 23:43:17.913148 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 15 23:43:17.913330 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 15 23:43:17.913404 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 15 23:43:17.913471 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 15 23:43:17.913539 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 15 23:43:17.913604 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 15 23:43:17.913668 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 15 23:43:17.913737 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 15 23:43:17.913803 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 15 23:43:17.913867 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 15 23:43:17.913938 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 15 23:43:17.914029 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 15 23:43:17.914099 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 15 23:43:17.914168 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 15 23:43:17.915232 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 15 23:43:17.915317 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 15 23:43:17.915410 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 15 23:43:17.915490 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 15 23:43:17.915556 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 15 23:43:17.915625 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 15 23:43:17.915690 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 15 23:43:17.915754 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 15 23:43:17.915823 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 15 23:43:17.915889 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 15 23:43:17.915982 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 15 23:43:17.916056 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 15 23:43:17.916125 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 15 23:43:17.916241 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 15 23:43:17.916318 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 15 23:43:17.916385 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 15 23:43:17.916463 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 15 23:43:17.916534 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 15 23:43:17.916604 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 15 23:43:17.916685 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 15 23:43:17.916763 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 15 23:43:17.916826 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 15 23:43:17.916895 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 15 23:43:17.916977 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 15 23:43:17.917053 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 15 23:43:17.917118 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 15 23:43:17.917277 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 15 23:43:17.917356 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 15 23:43:17.917422 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 15 23:43:17.917484 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 15 23:43:17.917547 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 15 23:43:17.917612 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 15 23:43:17.917681 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 15 23:43:17.917768 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 15 23:43:17.917852 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 15 23:43:17.917916 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 15 23:43:17.918035 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 15 23:43:17.918116 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 15 23:43:17.918256 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 15 23:43:17.918347 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 15 23:43:17.918420 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 15 23:43:17.918482 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 15 23:43:17.918545 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 15 23:43:17.918610 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 15 23:43:17.918675 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 15 23:43:17.918739 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 15 23:43:17.918807 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 15 23:43:17.918882 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 15 23:43:17.918966 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:43:17.919037 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 15 23:43:17.919103 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 15 23:43:17.919173 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 15 23:43:17.919305 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 15 23:43:17.919373 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 15 23:43:17.919447 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 15 23:43:17.919528 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 15 23:43:17.919601 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 15 23:43:17.919665 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 15 23:43:17.919730 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 15 23:43:17.919802 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 15 23:43:17.919870 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 15 23:43:17.919937 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 15 23:43:17.920055 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 15 23:43:17.920123 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 15 23:43:17.920215 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 15 23:43:17.920297 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 15 23:43:17.920363 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 15 23:43:17.920428 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 15 23:43:17.920496 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 15 23:43:17.920560 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 15 23:43:17.920632 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 15 23:43:17.920698 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 15 23:43:17.920766 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 15 23:43:17.920830 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 15 23:43:17.920894 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 15 23:43:17.921002 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 15 23:43:17.921083 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 15 23:43:17.921152 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 15 23:43:17.921267 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 15 23:43:17.921333 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 15 23:43:17.921396 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 15 23:43:17.921458 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 15 23:43:17.921539 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 15 23:43:17.921604 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 15 23:43:17.921673 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 15 23:43:17.921737 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 15 23:43:17.921800 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 15 23:43:17.921863 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 15 23:43:17.921925 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 15 23:43:17.922004 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 15 23:43:17.922070 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 15 23:43:17.922135 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 15 23:43:17.922221 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 15 23:43:17.922289 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 15 23:43:17.922354 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 15 23:43:17.922420 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 15 23:43:17.922486 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 15 23:43:17.922553 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 23:43:17.922611 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 23:43:17.922672 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 23:43:17.922747 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 15 23:43:17.922808 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 15 23:43:17.922868 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 15 23:43:17.922935 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 15 23:43:17.923044 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 15 23:43:17.923108 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 15 23:43:17.923181 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 15 23:43:17.923313 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 15 23:43:17.923373 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 15 23:43:17.923440 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 15 23:43:17.923498 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 15 23:43:17.923557 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 15 23:43:17.923624 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 15 23:43:17.923682 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 15 23:43:17.923741 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 15 23:43:17.923807 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 15 23:43:17.923870 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 15 23:43:17.923928 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 15 23:43:17.924013 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 15 23:43:17.924074 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 15 23:43:17.924134 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 15 23:43:17.924254 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 15 23:43:17.924323 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 15 23:43:17.924389 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 15 23:43:17.924456 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 15 23:43:17.924517 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 15 23:43:17.924576 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 15 23:43:17.924586 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 23:43:17.924594 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 23:43:17.924602 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 23:43:17.924609 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 23:43:17.924619 kernel: iommu: Default domain type: Translated
May 15 23:43:17.924627 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 23:43:17.924635 kernel: efivars: Registered efivars operations
May 15 23:43:17.924642 kernel: vgaarb: loaded
May 15 23:43:17.924650 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 23:43:17.924657 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:43:17.924665 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:43:17.924672 kernel: pnp: PnP ACPI init
May 15 23:43:17.924755 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 23:43:17.924769 kernel: pnp: PnP ACPI: found 1 devices
May 15 23:43:17.924777 kernel: NET: Registered PF_INET protocol family
May 15 23:43:17.924785 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:43:17.924793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:43:17.924800 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:43:17.924808 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:43:17.924816 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:43:17.924824 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:43:17.924833 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:43:17.924841 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:43:17.924848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:43:17.924922 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 15 23:43:17.924933 kernel: PCI: CLS 0 bytes, default 64
May 15 23:43:17.924941 kernel: kvm [1]: HYP mode not available
May 15 23:43:17.924948 kernel: Initialise system trusted keyrings
May 15 23:43:17.925003 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:43:17.925012 kernel: Key type asymmetric registered
May 15 23:43:17.925021 kernel: Asymmetric key parser 'x509' registered
May 15 23:43:17.925028 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 23:43:17.925036 kernel: io scheduler mq-deadline registered
May 15 23:43:17.925044 kernel: io scheduler kyber registered
May 15 23:43:17.925051 kernel: io scheduler bfq registered
May 15 23:43:17.925059 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 15 23:43:17.925147 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 15 23:43:17.925306 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 15 23:43:17.925380 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 15 23:43:17.925448 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 15 23:43:17.925513 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 15 23:43:17.925578 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ May 15 23:43:17.925665 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 15 23:43:17.925737 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 15 23:43:17.925806 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.925878 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 15 23:43:17.925941 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 15 23:43:17.926028 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.926097 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 15 23:43:17.926163 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 15 23:43:17.926248 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.926317 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 15 23:43:17.926381 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 15 23:43:17.926445 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.926514 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 15 23:43:17.926603 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 15 23:43:17.926676 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.926742 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 15 23:43:17.926806 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 15 23:43:17.926871 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 
23:43:17.926881 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 15 23:43:17.926945 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 15 23:43:17.927030 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 15 23:43:17.927096 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 15 23:43:17.927106 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 23:43:17.927114 kernel: ACPI: button: Power Button [PWRB] May 15 23:43:17.927122 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 23:43:17.927274 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 15 23:43:17.927359 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 15 23:43:17.927371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 23:43:17.927383 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 15 23:43:17.927451 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 15 23:43:17.927461 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 15 23:43:17.927469 kernel: thunder_xcv, ver 1.0 May 15 23:43:17.927477 kernel: thunder_bgx, ver 1.0 May 15 23:43:17.927484 kernel: nicpf, ver 1.0 May 15 23:43:17.927492 kernel: nicvf, ver 1.0 May 15 23:43:17.927566 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 23:43:17.927627 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T23:43:17 UTC (1747352597) May 15 23:43:17.927639 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 23:43:17.927647 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 23:43:17.927655 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 23:43:17.927662 kernel: watchdog: Hard watchdog permanently disabled May 15 23:43:17.927670 kernel: NET: Registered PF_INET6 protocol family May 15 23:43:17.927678 kernel: Segment 
Routing with IPv6 May 15 23:43:17.927685 kernel: In-situ OAM (IOAM) with IPv6 May 15 23:43:17.927693 kernel: NET: Registered PF_PACKET protocol family May 15 23:43:17.927702 kernel: Key type dns_resolver registered May 15 23:43:17.927710 kernel: registered taskstats version 1 May 15 23:43:17.927717 kernel: Loading compiled-in X.509 certificates May 15 23:43:17.927725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: c5ee9c587519d4ef57ff0de9630e786a4c7faded' May 15 23:43:17.927733 kernel: Key type .fscrypt registered May 15 23:43:17.927740 kernel: Key type fscrypt-provisioning registered May 15 23:43:17.927748 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 23:43:17.927756 kernel: ima: Allocated hash algorithm: sha1 May 15 23:43:17.927765 kernel: ima: No architecture policies found May 15 23:43:17.927774 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 23:43:17.927782 kernel: clk: Disabling unused clocks May 15 23:43:17.927789 kernel: Freeing unused kernel memory: 39744K May 15 23:43:17.927797 kernel: Run /init as init process May 15 23:43:17.927804 kernel: with arguments: May 15 23:43:17.927812 kernel: /init May 15 23:43:17.927819 kernel: with environment: May 15 23:43:17.927827 kernel: HOME=/ May 15 23:43:17.927834 kernel: TERM=linux May 15 23:43:17.927843 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 23:43:17.927853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 23:43:17.927863 systemd[1]: Detected virtualization kvm. May 15 23:43:17.927871 systemd[1]: Detected architecture arm64. May 15 23:43:17.927879 systemd[1]: Running in initrd. 
May 15 23:43:17.927887 systemd[1]: No hostname configured, using default hostname. May 15 23:43:17.927894 systemd[1]: Hostname set to . May 15 23:43:17.927904 systemd[1]: Initializing machine ID from VM UUID. May 15 23:43:17.927913 systemd[1]: Queued start job for default target initrd.target. May 15 23:43:17.927921 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:43:17.927929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:43:17.927938 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 23:43:17.927946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:43:17.927998 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 23:43:17.928007 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 23:43:17.928020 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 23:43:17.928029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 23:43:17.928037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:43:17.928045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:43:17.928053 systemd[1]: Reached target paths.target - Path Units. May 15 23:43:17.928061 systemd[1]: Reached target slices.target - Slice Units. May 15 23:43:17.928069 systemd[1]: Reached target swap.target - Swaps. May 15 23:43:17.928078 systemd[1]: Reached target timers.target - Timer Units. May 15 23:43:17.928087 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 15 23:43:17.928095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:43:17.928103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:43:17.928111 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 23:43:17.928119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:43:17.928127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:43:17.928135 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:43:17.928143 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:43:17.928153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:43:17.928161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:43:17.928169 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:43:17.928177 systemd[1]: Starting systemd-fsck-usr.service... May 15 23:43:17.928249 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:43:17.928259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:43:17.928267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:43:17.928275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:43:17.928286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:43:17.928294 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:43:17.928331 systemd-journald[237]: Collecting audit messages is disabled. May 15 23:43:17.928354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:43:17.928363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 23:43:17.928371 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 23:43:17.928379 kernel: Bridge firewalling registered May 15 23:43:17.928387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:43:17.928395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:43:17.928406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:43:17.928415 systemd-journald[237]: Journal started May 15 23:43:17.928434 systemd-journald[237]: Runtime Journal (/run/log/journal/449e524145ae4837ab8c0e4700d13599) is 8.0M, max 76.6M, 68.6M free. May 15 23:43:17.899026 systemd-modules-load[238]: Inserted module 'overlay' May 15 23:43:17.919518 systemd-modules-load[238]: Inserted module 'br_netfilter' May 15 23:43:17.932237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:43:17.936785 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:43:17.936839 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:43:17.948735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:43:17.951844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:43:17.961653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:43:17.965432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:43:17.972477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 23:43:17.974115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:43:17.981377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 15 23:43:17.988495 dracut-cmdline[271]: dracut-dracut-053 May 15 23:43:17.991982 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a May 15 23:43:18.018798 systemd-resolved[276]: Positive Trust Anchors: May 15 23:43:18.018873 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:43:18.018905 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:43:18.023829 systemd-resolved[276]: Defaulting to hostname 'linux'. May 15 23:43:18.024909 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:43:18.028890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:43:18.090255 kernel: SCSI subsystem initialized May 15 23:43:18.095230 kernel: Loading iSCSI transport class v2.0-870. May 15 23:43:18.103257 kernel: iscsi: registered transport (tcp) May 15 23:43:18.117210 kernel: iscsi: registered transport (qla4xxx) May 15 23:43:18.117271 kernel: QLogic iSCSI HBA Driver May 15 23:43:18.162780 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
May 15 23:43:18.172384 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:43:18.196528 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 23:43:18.196605 kernel: device-mapper: uevent: version 1.0.3 May 15 23:43:18.196618 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:43:18.248256 kernel: raid6: neonx8 gen() 15703 MB/s May 15 23:43:18.265259 kernel: raid6: neonx4 gen() 15473 MB/s May 15 23:43:18.282252 kernel: raid6: neonx2 gen() 13082 MB/s May 15 23:43:18.299256 kernel: raid6: neonx1 gen() 10379 MB/s May 15 23:43:18.316231 kernel: raid6: int64x8 gen() 6908 MB/s May 15 23:43:18.333251 kernel: raid6: int64x4 gen() 7306 MB/s May 15 23:43:18.350245 kernel: raid6: int64x2 gen() 6051 MB/s May 15 23:43:18.367242 kernel: raid6: int64x1 gen() 5021 MB/s May 15 23:43:18.367295 kernel: raid6: using algorithm neonx8 gen() 15703 MB/s May 15 23:43:18.384250 kernel: raid6: .... xor() 11826 MB/s, rmw enabled May 15 23:43:18.384307 kernel: raid6: using neon recovery algorithm May 15 23:43:18.389403 kernel: xor: measuring software checksum speed May 15 23:43:18.389457 kernel: 8regs : 19778 MB/sec May 15 23:43:18.389479 kernel: 32regs : 17297 MB/sec May 15 23:43:18.390224 kernel: arm64_neon : 27007 MB/sec May 15 23:43:18.390269 kernel: xor: using function: arm64_neon (27007 MB/sec) May 15 23:43:18.440252 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:43:18.456264 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:43:18.462483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:43:18.476103 systemd-udevd[455]: Using default interface naming scheme 'v255'. May 15 23:43:18.479658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 15 23:43:18.489926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 23:43:18.505302 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 15 23:43:18.544711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:43:18.550401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:43:18.601139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:43:18.610543 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:43:18.628722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 23:43:18.634550 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:43:18.635508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:43:18.636110 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:43:18.647582 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:43:18.663678 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:43:18.697623 kernel: scsi host0: Virtio SCSI HBA May 15 23:43:18.712307 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 23:43:18.713264 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 15 23:43:18.719666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:43:18.719795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:43:18.723083 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:43:18.726247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:43:18.726439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 23:43:18.727105 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:43:18.735639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:43:18.743492 kernel: ACPI: bus type USB registered May 15 23:43:18.743556 kernel: usbcore: registered new interface driver usbfs May 15 23:43:18.743568 kernel: usbcore: registered new interface driver hub May 15 23:43:18.743578 kernel: usbcore: registered new device driver usb May 15 23:43:18.764919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:43:18.773176 kernel: sr 0:0:0:0: Power-on or device reset occurred May 15 23:43:18.773429 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 15 23:43:18.775222 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 23:43:18.774431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:43:18.786621 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 15 23:43:18.786860 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 15 23:43:18.787002 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 15 23:43:18.787126 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 15 23:43:18.787243 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 15 23:43:18.787326 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 15 23:43:18.788208 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 15 23:43:18.790510 kernel: hub 1-0:1.0: USB hub found May 15 23:43:18.790730 kernel: hub 1-0:1.0: 4 ports detected May 15 23:43:18.792204 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 15 23:43:18.796243 kernel: hub 2-0:1.0: USB hub found May 15 23:43:18.796485 kernel: hub 2-0:1.0: 4 ports detected May 15 23:43:18.799269 kernel: sd 0:0:0:1: Power-on or device reset occurred May 15 23:43:18.800499 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 15 23:43:18.800688 kernel: sd 0:0:0:1: [sda] Write Protect is off May 15 23:43:18.800773 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 15 23:43:18.801526 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 15 23:43:18.805445 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:43:18.805496 kernel: GPT:17805311 != 80003071 May 15 23:43:18.806407 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:43:18.806431 kernel: GPT:17805311 != 80003071 May 15 23:43:18.806441 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:43:18.807453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 23:43:18.809239 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 15 23:43:18.811797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:43:18.849642 kernel: BTRFS: device fsid 462ff9f1-7a02-4839-b355-edf30dab0598 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (529) May 15 23:43:18.852998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 15 23:43:18.859208 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (511) May 15 23:43:18.866127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 15 23:43:18.877365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 15 23:43:18.878104 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
May 15 23:43:18.884409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 23:43:18.892407 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 23:43:18.901160 disk-uuid[576]: Primary Header is updated. May 15 23:43:18.901160 disk-uuid[576]: Secondary Entries is updated. May 15 23:43:18.901160 disk-uuid[576]: Secondary Header is updated. May 15 23:43:18.912210 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 23:43:19.032341 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 15 23:43:19.169670 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 15 23:43:19.169759 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 15 23:43:19.170560 kernel: usbcore: registered new interface driver usbhid May 15 23:43:19.170601 kernel: usbhid: USB HID core driver May 15 23:43:19.277265 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 15 23:43:19.408253 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 15 23:43:19.461230 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 15 23:43:19.925040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 23:43:19.925225 disk-uuid[578]: The operation has completed successfully. May 15 23:43:19.975517 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:43:19.975643 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:43:19.991470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 15 23:43:19.997305 sh[593]: Success May 15 23:43:20.013213 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 23:43:20.072663 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:43:20.076770 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:43:20.079452 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:43:20.115246 kernel: BTRFS info (device dm-0): first mount of filesystem 462ff9f1-7a02-4839-b355-edf30dab0598 May 15 23:43:20.115313 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 23:43:20.115332 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:43:20.116511 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:43:20.116547 kernel: BTRFS info (device dm-0): using free space tree May 15 23:43:20.123224 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 15 23:43:20.125983 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:43:20.127706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:43:20.133444 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:43:20.138429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 23:43:20.151759 kernel: BTRFS info (device sda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:43:20.151828 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:43:20.151839 kernel: BTRFS info (device sda6): using free space tree May 15 23:43:20.156249 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 23:43:20.156310 kernel: BTRFS info (device sda6): auto enabling async discard May 15 23:43:20.165211 kernel: BTRFS info (device sda6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:43:20.165357 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 23:43:20.170685 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:43:20.179781 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:43:20.255478 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:43:20.266661 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:43:20.281913 ignition[689]: Ignition 2.20.0 May 15 23:43:20.281924 ignition[689]: Stage: fetch-offline May 15 23:43:20.282019 ignition[689]: no configs at "/usr/lib/ignition/base.d" May 15 23:43:20.282031 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 15 23:43:20.282257 ignition[689]: parsed url from cmdline: "" May 15 23:43:20.282261 ignition[689]: no config URL provided May 15 23:43:20.282266 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:43:20.282274 ignition[689]: no config at "/usr/lib/ignition/user.ign" May 15 23:43:20.282279 ignition[689]: failed to fetch config: resource requires networking May 15 23:43:20.282606 ignition[689]: Ignition finished successfully May 15 23:43:20.287571 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 15 23:43:20.290049 systemd-networkd[781]: lo: Link UP May 15 23:43:20.290059 systemd-networkd[781]: lo: Gained carrier May 15 23:43:20.291884 systemd-networkd[781]: Enumeration completed May 15 23:43:20.292353 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:43:20.292871 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:43:20.292875 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:43:20.294229 systemd[1]: Reached target network.target - Network. May 15 23:43:20.294880 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:43:20.294884 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:43:20.295552 systemd-networkd[781]: eth0: Link UP May 15 23:43:20.295556 systemd-networkd[781]: eth0: Gained carrier May 15 23:43:20.295564 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:43:20.302443 systemd-networkd[781]: eth1: Link UP May 15 23:43:20.302460 systemd-networkd[781]: eth1: Gained carrier May 15 23:43:20.302471 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:43:20.304450 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 15 23:43:20.318755 ignition[784]: Ignition 2.20.0
May 15 23:43:20.318765 ignition[784]: Stage: fetch
May 15 23:43:20.318976 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 15 23:43:20.318988 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:20.319084 ignition[784]: parsed url from cmdline: ""
May 15 23:43:20.319087 ignition[784]: no config URL provided
May 15 23:43:20.319094 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
May 15 23:43:20.319101 ignition[784]: no config at "/usr/lib/ignition/user.ign"
May 15 23:43:20.319200 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 15 23:43:20.320042 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 23:43:20.332310 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:43:20.374332 systemd-networkd[781]: eth0: DHCPv4 address 168.119.108.125/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 15 23:43:20.521146 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 15 23:43:20.527554 ignition[784]: GET result: OK
May 15 23:43:20.527702 ignition[784]: parsing config with SHA512: 03f846531bdd4bb79c1841a75f4e49b7afcdbfb1a54ed31e53900a65fa52c87d99a6f0f6dadfbebdd6b568cc036e3be4e3e76e3147c43ea9f56ba679a71cda02
May 15 23:43:20.534728 unknown[784]: fetched base config from "system"
May 15 23:43:20.534744 unknown[784]: fetched base config from "system"
May 15 23:43:20.535427 ignition[784]: fetch: fetch complete
May 15 23:43:20.534753 unknown[784]: fetched user config from "hetzner"
May 15 23:43:20.535435 ignition[784]: fetch: fetch passed
May 15 23:43:20.537180 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
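The fetch stage above fails on attempt #1 ("network is unreachable") because Ignition races DHCP, then succeeds on attempt #2 once the interfaces have addresses. A minimal sketch of that retry-until-network-up pattern (Ignition itself is written in Go; this helper and its names are illustrative only, not Ignition's actual code):

```python
import time

# Endpoint taken verbatim from the log above.
METADATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_with_retry(get, url, attempts=5, delay=0.0):
    """Call get(url) until it succeeds, mirroring the log's 'attempt #1 / #2' lines.

    `get` is any callable that returns the response body or raises OSError
    (e.g. 'connect: network is unreachable' before DHCP completes).
    """
    last_err = None
    for _ in range(attempts):
        try:
            return get(url)  # success corresponds to the "GET result: OK" line
        except OSError as err:
            last_err = err
            time.sleep(delay)  # back off before retrying
    raise last_err
```

With a fake `get` that fails once and then returns, the helper reproduces the two-attempt sequence seen in the log.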
May 15 23:43:20.535497 ignition[784]: Ignition finished successfully
May 15 23:43:20.543550 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 23:43:20.556825 ignition[791]: Ignition 2.20.0
May 15 23:43:20.558066 ignition[791]: Stage: kargs
May 15 23:43:20.558405 ignition[791]: no configs at "/usr/lib/ignition/base.d"
May 15 23:43:20.558417 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:20.561120 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 23:43:20.559401 ignition[791]: kargs: kargs passed
May 15 23:43:20.559451 ignition[791]: Ignition finished successfully
May 15 23:43:20.568480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 23:43:20.583214 ignition[797]: Ignition 2.20.0
May 15 23:43:20.583225 ignition[797]: Stage: disks
May 15 23:43:20.583400 ignition[797]: no configs at "/usr/lib/ignition/base.d"
May 15 23:43:20.583409 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:20.586336 ignition[797]: disks: disks passed
May 15 23:43:20.586414 ignition[797]: Ignition finished successfully
May 15 23:43:20.588475 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 23:43:20.590048 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 23:43:20.591181 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 23:43:20.591870 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:43:20.593354 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:43:20.594407 systemd[1]: Reached target basic.target - Basic System.
May 15 23:43:20.601546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 23:43:20.619719 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 15 23:43:20.624680 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 23:43:20.629373 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 23:43:20.684233 kernel: EXT4-fs (sda9): mounted filesystem 759e3456-2e58-4307-81e1-19f20d3141c2 r/w with ordered data mode. Quota mode: none.
May 15 23:43:20.685353 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 23:43:20.688543 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 23:43:20.698406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:43:20.703117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 23:43:20.706947 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 15 23:43:20.711595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 23:43:20.711648 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:43:20.716981 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 23:43:20.718715 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813)
May 15 23:43:20.721060 kernel: BTRFS info (device sda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46
May 15 23:43:20.721121 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 15 23:43:20.721142 kernel: BTRFS info (device sda6): using free space tree
May 15 23:43:20.725245 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 23:43:20.725293 kernel: BTRFS info (device sda6): auto enabling async discard
May 15 23:43:20.727495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 23:43:20.730982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:43:20.785364 coreos-metadata[815]: May 15 23:43:20.785 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 15 23:43:20.789801 coreos-metadata[815]: May 15 23:43:20.788 INFO Fetch successful
May 15 23:43:20.789801 coreos-metadata[815]: May 15 23:43:20.788 INFO wrote hostname ci-4152-2-3-n-32b6392e63 to /sysroot/etc/hostname
May 15 23:43:20.791513 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 23:43:20.795063 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
May 15 23:43:20.801804 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
May 15 23:43:20.807034 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
May 15 23:43:20.811143 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 23:43:20.912871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 23:43:20.917474 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 23:43:20.921394 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 23:43:20.932233 kernel: BTRFS info (device sda6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46
May 15 23:43:20.948979 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 23:43:20.958137 ignition[930]: INFO : Ignition 2.20.0
May 15 23:43:20.958137 ignition[930]: INFO : Stage: mount
May 15 23:43:20.960361 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:43:20.960361 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:20.960361 ignition[930]: INFO : mount: mount passed
May 15 23:43:20.960361 ignition[930]: INFO : Ignition finished successfully
May 15 23:43:20.961743 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 23:43:20.967404 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 23:43:21.116902 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 23:43:21.123485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:43:21.150284 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941)
May 15 23:43:21.150416 kernel: BTRFS info (device sda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46
May 15 23:43:21.152570 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 15 23:43:21.152617 kernel: BTRFS info (device sda6): using free space tree
May 15 23:43:21.156257 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 23:43:21.156318 kernel: BTRFS info (device sda6): auto enabling async discard
May 15 23:43:21.159388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:43:21.177154 ignition[957]: INFO : Ignition 2.20.0
May 15 23:43:21.177829 ignition[957]: INFO : Stage: files
May 15 23:43:21.178268 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:43:21.178268 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:21.179316 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
May 15 23:43:21.180234 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 23:43:21.180234 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 23:43:21.184436 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 23:43:21.186244 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 23:43:21.186244 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 23:43:21.184913 unknown[957]: wrote ssh authorized keys file for user: core
May 15 23:43:21.189243 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 15 23:43:21.189243 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
May 15 23:43:21.302333 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 23:43:21.503198 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 15 23:43:21.503198 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:43:21.506806 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 23:43:22.143695 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 23:43:22.148560 systemd-networkd[781]: eth0: Gained IPv6LL
May 15 23:43:22.276472 systemd-networkd[781]: eth1: Gained IPv6LL
May 15 23:43:22.391214 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:43:22.392678 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 15 23:43:22.403075 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 15 23:43:22.403075 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 15 23:43:22.403075 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
May 15 23:43:22.958134 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 23:43:23.174300 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 15 23:43:23.174300 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 23:43:23.177437 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 23:43:23.187396 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:43:23.187396 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:43:23.187396 ignition[957]: INFO : files: files passed
May 15 23:43:23.187396 ignition[957]: INFO : Ignition finished successfully
May 15 23:43:23.180108 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 23:43:23.188439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 23:43:23.192130 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 23:43:23.195854 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 23:43:23.197280 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 23:43:23.206432 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:43:23.206432 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:43:23.208928 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:43:23.211331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:43:23.212293 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 23:43:23.217413 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 23:43:23.259733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 23:43:23.259949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 23:43:23.261449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 23:43:23.263194 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 23:43:23.264795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 23:43:23.266364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 23:43:23.286479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:43:23.293374 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 23:43:23.308082 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 23:43:23.309650 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:43:23.310448 systemd[1]: Stopped target timers.target - Timer Units.
May 15 23:43:23.311519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 23:43:23.311646 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:43:23.313043 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 23:43:23.313749 systemd[1]: Stopped target basic.target - Basic System.
May 15 23:43:23.314978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 23:43:23.316138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:43:23.317159 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 23:43:23.318283 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 23:43:23.319538 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:43:23.320943 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 23:43:23.322115 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 23:43:23.323402 systemd[1]: Stopped target swap.target - Swaps.
May 15 23:43:23.324359 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 23:43:23.324499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:43:23.325944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 23:43:23.326615 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:43:23.327655 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 23:43:23.327741 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:43:23.328829 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 23:43:23.328965 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 23:43:23.330587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 23:43:23.330720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:43:23.332032 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 23:43:23.332125 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 23:43:23.333256 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 15 23:43:23.333352 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 23:43:23.344523 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 23:43:23.350465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 23:43:23.351772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 23:43:23.351969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:43:23.354467 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 23:43:23.354581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:43:23.364721 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 23:43:23.364831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 23:43:23.367661 ignition[1011]: INFO : Ignition 2.20.0
May 15 23:43:23.367661 ignition[1011]: INFO : Stage: umount
May 15 23:43:23.367661 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:43:23.367661 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 15 23:43:23.367661 ignition[1011]: INFO : umount: umount passed
May 15 23:43:23.367661 ignition[1011]: INFO : Ignition finished successfully
May 15 23:43:23.369441 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 23:43:23.369569 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 23:43:23.371984 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 23:43:23.372092 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 23:43:23.375678 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 23:43:23.375744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 23:43:23.378546 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 23:43:23.378606 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 23:43:23.379331 systemd[1]: Stopped target network.target - Network.
May 15 23:43:23.380150 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 23:43:23.381943 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:43:23.385347 systemd[1]: Stopped target paths.target - Path Units.
May 15 23:43:23.386599 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 23:43:23.386809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:43:23.389654 systemd[1]: Stopped target slices.target - Slice Units.
May 15 23:43:23.390541 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 23:43:23.394485 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 23:43:23.394547 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:43:23.395499 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 23:43:23.395547 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:43:23.398314 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 23:43:23.398383 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 23:43:23.399048 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 23:43:23.399119 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 23:43:23.404383 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 23:43:23.405020 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 23:43:23.409685 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 23:43:23.415245 systemd-networkd[781]: eth0: DHCPv6 lease lost
May 15 23:43:23.416057 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 23:43:23.416290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 23:43:23.422458 systemd-networkd[781]: eth1: DHCPv6 lease lost
May 15 23:43:23.422523 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 23:43:23.422635 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:43:23.426070 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 23:43:23.426308 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 23:43:23.431802 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 23:43:23.431864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:43:23.438433 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 23:43:23.439224 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 23:43:23.439298 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:43:23.440273 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:43:23.440313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:43:23.442769 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 23:43:23.442817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 23:43:23.446233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:43:23.447799 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 23:43:23.447890 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 23:43:23.460713 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 23:43:23.460827 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 23:43:23.463540 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 23:43:23.463670 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 23:43:23.465505 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 23:43:23.465716 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:43:23.467466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 23:43:23.467509 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 23:43:23.468389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 23:43:23.468423 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:43:23.469053 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 23:43:23.469120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:43:23.469894 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 23:43:23.469951 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 23:43:23.471021 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:43:23.471082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:43:23.478769 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 23:43:23.479478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 23:43:23.479544 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:43:23.480286 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 23:43:23.480330 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:43:23.483520 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 23:43:23.483582 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:43:23.487899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:43:23.487992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:43:23.491627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 23:43:23.491734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 23:43:23.496370 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 23:43:23.502498 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 23:43:23.510772 systemd[1]: Switching root.
May 15 23:43:23.554013 systemd-journald[237]: Journal stopped
May 15 23:43:24.487951 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
May 15 23:43:24.488016 kernel: SELinux: policy capability network_peer_controls=1
May 15 23:43:24.488029 kernel: SELinux: policy capability open_perms=1
May 15 23:43:24.488039 kernel: SELinux: policy capability extended_socket_class=1
May 15 23:43:24.488048 kernel: SELinux: policy capability always_check_network=0
May 15 23:43:24.488058 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 23:43:24.488068 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 23:43:24.488078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 23:43:24.488090 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 23:43:24.488172 kernel: audit: type=1403 audit(1747352603.687:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 23:43:24.488243 systemd[1]: Successfully loaded SELinux policy in 38.879ms.
May 15 23:43:24.488275 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.577ms.
May 15 23:43:24.488288 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:43:24.488298 systemd[1]: Detected virtualization kvm.
May 15 23:43:24.488309 systemd[1]: Detected architecture arm64.
May 15 23:43:24.488319 systemd[1]: Detected first boot.
May 15 23:43:24.488329 systemd[1]: Hostname set to .
May 15 23:43:24.488344 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:43:24.488359 zram_generator::config[1054]: No configuration found.
May 15 23:43:24.488371 systemd[1]: Populated /etc with preset unit settings.
May 15 23:43:24.488385 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 23:43:24.488397 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 23:43:24.488407 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 23:43:24.488419 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 23:43:24.488430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 23:43:24.488441 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 23:43:24.488451 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 23:43:24.488462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 23:43:24.488474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 23:43:24.488485 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 23:43:24.488497 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 23:43:24.488507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:43:24.488517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:43:24.488529 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 23:43:24.488541 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 23:43:24.488552 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 23:43:24.488563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:43:24.488574 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 23:43:24.488584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:43:24.488594 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 23:43:24.488604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 23:43:24.488616 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 23:43:24.488627 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 23:43:24.488638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:43:24.488652 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:43:24.488663 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:43:24.488673 systemd[1]: Reached target swap.target - Swaps.
May 15 23:43:24.488725 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 23:43:24.488740 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 23:43:24.488753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:43:24.488764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:43:24.488775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:43:24.488786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 23:43:24.488797 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 23:43:24.488807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 23:43:24.488818 systemd[1]: Mounting media.mount - External Media Directory...
May 15 23:43:24.488832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 23:43:24.488845 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 23:43:24.488857 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 23:43:24.488869 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 23:43:24.488880 systemd[1]: Reached target machines.target - Containers.
May 15 23:43:24.488890 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 23:43:24.488912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:43:24.488927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:43:24.488938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 23:43:24.488949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:43:24.488962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:43:24.488972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:43:24.488983 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 23:43:24.488993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:43:24.489006 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 23:43:24.489019 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 23:43:24.489030 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 23:43:24.489041 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 23:43:24.489051 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 23:43:24.489061 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:43:24.489072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:43:24.489085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 23:43:24.489096 kernel: loop: module loaded
May 15 23:43:24.489107 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 23:43:24.489119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:43:24.489132 kernel: ACPI: bus type drm_connector registered
May 15 23:43:24.489142 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 23:43:24.489152 systemd[1]: Stopped verity-setup.service.
May 15 23:43:24.489163 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 23:43:24.489175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 23:43:24.489256 systemd[1]: Mounted media.mount - External Media Directory.
May 15 23:43:24.489270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 23:43:24.489280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 23:43:24.489291 kernel: fuse: init (API version 7.39)
May 15 23:43:24.489301 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 23:43:24.489312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:43:24.489322 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 23:43:24.489333 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 23:43:24.489347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:43:24.489357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:43:24.489368 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:43:24.489378 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:43:24.489423 systemd-journald[1121]: Collecting audit messages is disabled.
May 15 23:43:24.489453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:43:24.489477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:43:24.489492 systemd-journald[1121]: Journal started
May 15 23:43:24.489515 systemd-journald[1121]: Runtime Journal (/run/log/journal/449e524145ae4837ab8c0e4700d13599) is 8.0M, max 76.6M, 68.6M free.
May 15 23:43:24.223435 systemd[1]: Queued start job for default target multi-user.target.
May 15 23:43:24.241427 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 15 23:43:24.241982 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 23:43:24.491572 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:43:24.494257 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 23:43:24.495157 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 23:43:24.495489 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 23:43:24.497319 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:43:24.497552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:43:24.499601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 23:43:24.500533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 23:43:24.501681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:43:24.515573 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 23:43:24.522436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 23:43:24.528391 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 23:43:24.531309 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 23:43:24.531353 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:43:24.535212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 23:43:24.546794 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 23:43:24.562564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 23:43:24.565241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:43:24.569387 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 23:43:24.576305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 23:43:24.577096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:43:24.585583 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 23:43:24.587104 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:43:24.589395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:43:24.602352 systemd-journald[1121]: Time spent on flushing to /var/log/journal/449e524145ae4837ab8c0e4700d13599 is 55.834ms for 1128 entries.
May 15 23:43:24.602352 systemd-journald[1121]: System Journal (/var/log/journal/449e524145ae4837ab8c0e4700d13599) is 8.0M, max 584.8M, 576.8M free.
May 15 23:43:24.677858 systemd-journald[1121]: Received client request to flush runtime journal.
May 15 23:43:24.677910 kernel: loop0: detected capacity change from 0 to 116808
May 15 23:43:24.677927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 23:43:24.603659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 23:43:24.608891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:43:24.613246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:43:24.617813 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 23:43:24.620822 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 23:43:24.624671 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 23:43:24.625739 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 23:43:24.630960 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 23:43:24.645666 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 23:43:24.649177 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 23:43:24.667286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:43:24.685360 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 23:43:24.702236 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 23:43:24.705452 kernel: loop1: detected capacity change from 0 to 8
May 15 23:43:24.705812 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 23:43:24.725356 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
May 15 23:43:24.725386 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
May 15 23:43:24.729435 kernel: loop2: detected capacity change from 0 to 211168
May 15 23:43:24.732313 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 23:43:24.736441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:43:24.752772 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 23:43:24.787220 kernel: loop3: detected capacity change from 0 to 113536
May 15 23:43:24.800149 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 23:43:24.814481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:43:24.836108 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 15 23:43:24.836489 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 15 23:43:24.840912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:43:24.845272 kernel: loop4: detected capacity change from 0 to 116808
May 15 23:43:24.862307 kernel: loop5: detected capacity change from 0 to 8
May 15 23:43:24.864221 kernel: loop6: detected capacity change from 0 to 211168
May 15 23:43:24.890416 kernel: loop7: detected capacity change from 0 to 113536
May 15 23:43:24.905959 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 15 23:43:24.907122 (sd-merge)[1195]: Merged extensions into '/usr'.
May 15 23:43:24.916499 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 23:43:24.916671 systemd[1]: Reloading...
May 15 23:43:25.035233 zram_generator::config[1219]: No configuration found.
May 15 23:43:25.136650 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 23:43:25.208804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:43:25.263822 systemd[1]: Reloading finished in 346 ms.
May 15 23:43:25.315226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 23:43:25.316417 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 23:43:25.328030 systemd[1]: Starting ensure-sysext.service...
May 15 23:43:25.335012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:43:25.337764 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
May 15 23:43:25.337787 systemd[1]: Reloading...
May 15 23:43:25.368453 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 23:43:25.368744 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 23:43:25.369433 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 23:43:25.369646 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 15 23:43:25.369703 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 15 23:43:25.380084 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:43:25.380100 systemd-tmpfiles[1260]: Skipping /boot
May 15 23:43:25.390647 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:43:25.390665 systemd-tmpfiles[1260]: Skipping /boot
May 15 23:43:25.425225 zram_generator::config[1292]: No configuration found.
May 15 23:43:25.520645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:43:25.577058 systemd[1]: Reloading finished in 238 ms.
May 15 23:43:25.594765 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 23:43:25.600002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:43:25.616548 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:43:25.630424 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 23:43:25.635001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 23:43:25.639593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:43:25.644042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:43:25.646682 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 23:43:25.651424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:43:25.652791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:43:25.662502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:43:25.667860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:43:25.669446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:43:25.672383 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 23:43:25.675392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:43:25.675565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:43:25.679553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:43:25.689437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:43:25.690401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:43:25.691301 systemd[1]: Finished ensure-sysext.service.
May 15 23:43:25.700265 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 23:43:25.719159 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 23:43:25.727691 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:43:25.728029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:43:25.729664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:43:25.730028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:43:25.731426 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:43:25.732592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:43:25.732745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:43:25.736796 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 23:43:25.740722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:43:25.747511 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 23:43:25.748502 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:43:25.749265 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:43:25.769223 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
May 15 23:43:25.782597 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 23:43:25.785638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 23:43:25.788721 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 23:43:25.789998 augenrules[1365]: No rules
May 15 23:43:25.792128 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:43:25.792510 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:43:25.796975 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 23:43:25.808792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:43:25.815354 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:43:25.913536 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 23:43:25.922415 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 23:43:25.924471 systemd[1]: Reached target time-set.target - System Time Set.
May 15 23:43:25.929562 systemd-networkd[1382]: lo: Link UP
May 15 23:43:25.929577 systemd-networkd[1382]: lo: Gained carrier
May 15 23:43:25.930842 systemd-networkd[1382]: Enumeration completed
May 15 23:43:25.930964 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:43:25.932382 systemd-timesyncd[1344]: No network connectivity, watching for changes.
May 15 23:43:25.939426 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 23:43:25.940831 systemd-resolved[1329]: Positive Trust Anchors:
May 15 23:43:25.941247 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:43:25.941280 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:43:25.950841 systemd-resolved[1329]: Using system hostname 'ci-4152-2-3-n-32b6392e63'.
May 15 23:43:25.956719 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:43:25.958425 systemd[1]: Reached target network.target - Network.
May 15 23:43:25.959252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:43:26.059211 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1381)
May 15 23:43:26.064305 kernel: mousedev: PS/2 mouse device common for all mice
May 15 23:43:26.065254 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:43:26.065262 systemd-networkd[1382]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:43:26.066596 systemd-networkd[1382]: eth1: Link UP
May 15 23:43:26.066754 systemd-networkd[1382]: eth1: Gained carrier
May 15 23:43:26.066776 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:43:26.078776 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:43:26.078786 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:43:26.080823 systemd-networkd[1382]: eth0: Link UP
May 15 23:43:26.080964 systemd-networkd[1382]: eth0: Gained carrier
May 15 23:43:26.081034 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:43:26.095339 systemd-networkd[1382]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:43:26.097037 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
May 15 23:43:26.127345 systemd-networkd[1382]: eth0: DHCPv4 address 168.119.108.125/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 15 23:43:26.128139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 23:43:26.130259 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection.
May 15 23:43:26.137425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 23:43:26.150483 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 15 23:43:26.150611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:43:26.155390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:43:26.158395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:43:26.161433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:43:26.162040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:43:26.162079 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 23:43:26.165469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 23:43:26.167177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:43:26.167839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:43:26.188687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:43:26.188854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:43:26.190824 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:43:26.191356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:43:26.194903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:43:26.194949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:43:26.210560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:43:26.211222 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
May 15 23:43:26.211281 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 15 23:43:26.211300 kernel: [drm] features: -context_init
May 15 23:43:26.214225 kernel: [drm] number of scanouts: 1
May 15 23:43:26.214274 kernel: [drm] number of cap sets: 0
May 15 23:43:26.215207 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 15 23:43:26.224220 kernel: Console: switching to colour frame buffer device 160x50
May 15 23:43:26.240020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:43:26.241260 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 15 23:43:26.241344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:43:26.247711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:43:26.312681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:43:26.389328 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 23:43:26.399667 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 23:43:26.416028 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:43:26.445063 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 23:43:26.446802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:43:26.448093 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:43:26.449539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 23:43:26.450565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 23:43:26.451461 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 23:43:26.452122 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 23:43:26.453012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 23:43:26.453686 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 23:43:26.453726 systemd[1]: Reached target paths.target - Path Units.
May 15 23:43:26.454230 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:43:26.455823 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 23:43:26.458036 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 23:43:26.467086 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 23:43:26.471899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 23:43:26.474035 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 23:43:26.475158 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:43:26.475929 systemd[1]: Reached target basic.target - Basic System.
May 15 23:43:26.476709 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 23:43:26.476839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 23:43:26.482619 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 23:43:26.487226 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 23:43:26.490576 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:43:26.491149 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 23:43:26.499483 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 23:43:26.502911 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 23:43:26.503484 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 23:43:26.508541 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 23:43:26.513076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 23:43:26.516566 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 15 23:43:26.520699 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 23:43:26.526215 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 23:43:26.550465 jq[1451]: false
May 15 23:43:26.549580 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 23:43:26.551351 extend-filesystems[1452]: Found loop4
May 15 23:43:26.558828 extend-filesystems[1452]: Found loop5
May 15 23:43:26.558828 extend-filesystems[1452]: Found loop6
May 15 23:43:26.558828 extend-filesystems[1452]: Found loop7
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda1
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda2
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda3
May 15 23:43:26.558828 extend-filesystems[1452]: Found usr
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda4
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda6
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda7
May 15 23:43:26.558828 extend-filesystems[1452]: Found sda9
May 15 23:43:26.558828 extend-filesystems[1452]: Checking size of /dev/sda9
May 15 23:43:26.641121 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 15 23:43:26.552475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 23:43:26.642043 coreos-metadata[1449]: May 15 23:43:26.558 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 15 23:43:26.642043 coreos-metadata[1449]: May 15 23:43:26.565 INFO Fetch successful
May 15 23:43:26.642043 coreos-metadata[1449]: May 15 23:43:26.565 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 15 23:43:26.642043 coreos-metadata[1449]: May 15 23:43:26.566 INFO Fetch successful
May 15 23:43:26.642282 extend-filesystems[1452]: Resized partition /dev/sda9
May 15 23:43:26.553063 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 23:43:26.647309 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
May 15 23:43:26.558058 systemd[1]: Starting update-engine.service - Update Engine...
May 15 23:43:26.657035 dbus-daemon[1450]: [system] SELinux support is enabled May 15 23:43:26.564370 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:43:26.570630 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:43:26.582461 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:43:26.582628 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:43:26.582926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 23:43:26.673775 tar[1477]: linux-arm64/LICENSE May 15 23:43:26.673775 tar[1477]: linux-arm64/helm May 15 23:43:26.583154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:43:26.657480 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:43:26.674290 jq[1465]: true May 15 23:43:26.661459 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:43:26.661485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:43:26.662376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 23:43:26.662394 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:43:26.663634 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:43:26.678665 systemd[1]: motdgen.service: Deactivated successfully. May 15 23:43:26.678870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 15 23:43:26.703216 update_engine[1463]: I20250515 23:43:26.699751 1463 main.cc:92] Flatcar Update Engine starting May 15 23:43:26.707257 update_engine[1463]: I20250515 23:43:26.704763 1463 update_check_scheduler.cc:74] Next update check in 7m33s May 15 23:43:26.704926 systemd[1]: Started update-engine.service - Update Engine. May 15 23:43:26.712777 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 23:43:26.722200 jq[1490]: true May 15 23:43:26.747273 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1389) May 15 23:43:26.755974 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 15 23:43:26.777580 extend-filesystems[1470]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 15 23:43:26.777580 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 5 May 15 23:43:26.777580 extend-filesystems[1470]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 15 23:43:26.802656 extend-filesystems[1452]: Resized filesystem in /dev/sda9 May 15 23:43:26.802656 extend-filesystems[1452]: Found sr0 May 15 23:43:26.779525 systemd-logind[1460]: New seat seat0. May 15 23:43:26.784432 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:43:26.788297 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:43:26.816975 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) May 15 23:43:26.817000 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 15 23:43:26.817239 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:43:26.837790 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 23:43:26.838820 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 15 23:43:26.863738 bash[1522]: Updated "/home/core/.ssh/authorized_keys" May 15 23:43:26.866624 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:43:26.885088 systemd[1]: Starting sshkeys.service... May 15 23:43:26.916342 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 23:43:26.935798 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 23:43:27.029275 coreos-metadata[1529]: May 15 23:43:27.026 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 15 23:43:27.031292 coreos-metadata[1529]: May 15 23:43:27.031 INFO Fetch successful May 15 23:43:27.034313 unknown[1529]: wrote ssh authorized keys file for user: core May 15 23:43:27.056736 containerd[1485]: time="2025-05-15T23:43:27.056632040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 23:43:27.071377 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" May 15 23:43:27.072658 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 23:43:27.077730 systemd[1]: Finished sshkeys.service. May 15 23:43:27.091639 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:43:27.127576 containerd[1485]: time="2025-05-15T23:43:27.127518200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.133558 containerd[1485]: time="2025-05-15T23:43:27.133503760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 23:43:27.133558 containerd[1485]: time="2025-05-15T23:43:27.133548160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 23:43:27.133558 containerd[1485]: time="2025-05-15T23:43:27.133567160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 23:43:27.133759 containerd[1485]: time="2025-05-15T23:43:27.133736560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 23:43:27.133788 containerd[1485]: time="2025-05-15T23:43:27.133759200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.133847 containerd[1485]: time="2025-05-15T23:43:27.133827200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:43:27.133847 containerd[1485]: time="2025-05-15T23:43:27.133842880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.134081 containerd[1485]: time="2025-05-15T23:43:27.134057200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:43:27.134081 containerd[1485]: time="2025-05-15T23:43:27.134079400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 15 23:43:27.134133 containerd[1485]: time="2025-05-15T23:43:27.134093680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:43:27.134133 containerd[1485]: time="2025-05-15T23:43:27.134103000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.134281 containerd[1485]: time="2025-05-15T23:43:27.134258760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.134499 containerd[1485]: time="2025-05-15T23:43:27.134476880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 23:43:27.134604 containerd[1485]: time="2025-05-15T23:43:27.134584600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:43:27.134604 containerd[1485]: time="2025-05-15T23:43:27.134601760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 23:43:27.134689 containerd[1485]: time="2025-05-15T23:43:27.134673320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 23:43:27.134733 containerd[1485]: time="2025-05-15T23:43:27.134717840Z" level=info msg="metadata content store policy set" policy=shared May 15 23:43:27.141483 containerd[1485]: time="2025-05-15T23:43:27.141431680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 23:43:27.141622 containerd[1485]: time="2025-05-15T23:43:27.141506240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 15 23:43:27.141622 containerd[1485]: time="2025-05-15T23:43:27.141526840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 23:43:27.141622 containerd[1485]: time="2025-05-15T23:43:27.141543400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 23:43:27.141622 containerd[1485]: time="2025-05-15T23:43:27.141559200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.141731680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.141994600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142097240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142117360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142133360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142147880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142160840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142172440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142210160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142229200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142246880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142259800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142273 containerd[1485]: time="2025-05-15T23:43:27.142272160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142293160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142307840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142320840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142334720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142347400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142361720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142373280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142385720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142398360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142412560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142426040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142438960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142451800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142466280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 23:43:27.142537 containerd[1485]: time="2025-05-15T23:43:27.142488400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142502360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142514160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142691280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142709360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142720400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142732840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142741480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142754360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142763760Z" level=info msg="NRI interface is disabled by configuration." May 15 23:43:27.142786 containerd[1485]: time="2025-05-15T23:43:27.142775000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 23:43:27.148087 containerd[1485]: time="2025-05-15T23:43:27.143169000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 23:43:27.148087 containerd[1485]: time="2025-05-15T23:43:27.147266360Z" level=info msg="Connect containerd service" May 15 23:43:27.148087 containerd[1485]: time="2025-05-15T23:43:27.147323080Z" level=info msg="using legacy CRI server" May 15 23:43:27.148087 containerd[1485]: time="2025-05-15T23:43:27.147331240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:43:27.148087 containerd[1485]: time="2025-05-15T23:43:27.147582400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.152407680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.152923040Z" level=info msg="Start subscribing containerd event" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.152987160Z" level=info msg="Start recovering state" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153008640Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153050160Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153065120Z" level=info msg="Start event monitor" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153077360Z" level=info msg="Start snapshots syncer" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153086920Z" level=info msg="Start cni network conf syncer for default" May 15 23:43:27.155256 containerd[1485]: time="2025-05-15T23:43:27.153096800Z" level=info msg="Start streaming server" May 15 23:43:27.153377 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:43:27.155657 containerd[1485]: time="2025-05-15T23:43:27.155627040Z" level=info msg="containerd successfully booted in 0.101787s" May 15 23:43:27.170353 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:43:27.195242 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:43:27.205484 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:43:27.212496 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:43:27.212702 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 23:43:27.223104 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:43:27.233916 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:43:27.242906 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:43:27.251103 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 23:43:27.252380 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:43:27.317317 tar[1477]: linux-arm64/README.md May 15 23:43:27.332558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 15 23:43:27.844383 systemd-networkd[1382]: eth1: Gained IPv6LL May 15 23:43:27.845322 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. May 15 23:43:27.847693 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:43:27.850551 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:43:27.862013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:43:27.866137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:43:27.904127 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:43:28.036380 systemd-networkd[1382]: eth0: Gained IPv6LL May 15 23:43:28.037535 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. May 15 23:43:28.659420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:43:28.660763 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 23:43:28.665646 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:43:28.666122 systemd[1]: Startup finished in 786ms (kernel) + 5.994s (initrd) + 5.016s (userspace) = 11.796s. May 15 23:43:29.182280 kubelet[1580]: E0515 23:43:29.182218 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:43:29.185634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:43:29.186102 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:43:39.322838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 15 23:43:39.330540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:43:39.466494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:43:39.482744 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:43:39.532802 kubelet[1598]: E0515 23:43:39.532697 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:43:39.536386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:43:39.536594 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:43:41.860486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:43:41.868617 systemd[1]: Started sshd@0-168.119.108.125:22-45.227.255.115:37664.service - OpenSSH per-connection server daemon (45.227.255.115:37664). 
May 15 23:43:42.017105 sshd[1606]: Invalid user reboot from 45.227.255.115 port 37664 May 15 23:43:42.034411 sshd-session[1608]: pam_faillock(sshd:auth): User unknown May 15 23:43:42.036892 sshd[1606]: Postponed keyboard-interactive for invalid user reboot from 45.227.255.115 port 37664 ssh2 [preauth] May 15 23:43:42.048932 sshd-session[1608]: pam_unix(sshd:auth): check pass; user unknown May 15 23:43:42.048984 sshd-session[1608]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.227.255.115 May 15 23:43:42.049619 sshd-session[1608]: pam_faillock(sshd:auth): User unknown May 15 23:43:43.982619 sshd[1606]: PAM: Permission denied for illegal user reboot from 45.227.255.115 May 15 23:43:43.983227 sshd[1606]: Failed keyboard-interactive/pam for invalid user reboot from 45.227.255.115 port 37664 ssh2 May 15 23:43:43.996981 sshd[1606]: Received disconnect from 45.227.255.115 port 37664:11: Client disconnecting normally [preauth] May 15 23:43:43.996981 sshd[1606]: Disconnected from invalid user reboot 45.227.255.115 port 37664 [preauth] May 15 23:43:44.000285 systemd[1]: sshd@0-168.119.108.125:22-45.227.255.115:37664.service: Deactivated successfully. May 15 23:43:49.572757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 23:43:49.580594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:43:49.714388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:43:49.726842 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:43:49.779817 kubelet[1619]: E0515 23:43:49.779680 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:43:49.782708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:43:49.782936 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:43:58.184066 systemd-timesyncd[1344]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). May 15 23:43:58.184158 systemd-timesyncd[1344]: Initial clock synchronization to Thu 2025-05-15 23:43:58.459330 UTC. May 15 23:43:59.825272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 23:43:59.841655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:43:59.980290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:43:59.991656 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:00.038034 kubelet[1634]: E0515 23:44:00.037950 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:00.040592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:00.040750 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:44:10.072925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 23:44:10.080173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:44:10.212473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:44:10.218371 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:10.262657 kubelet[1649]: E0515 23:44:10.262606 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:10.265744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:10.265917 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:44:11.915285 update_engine[1463]: I20250515 23:44:11.914790 1463 update_attempter.cc:509] Updating boot flags... 
May 15 23:44:11.966348 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1665) May 15 23:44:12.021320 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1661) May 15 23:44:12.080266 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1661) May 15 23:44:20.322840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 15 23:44:20.331506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:44:20.460058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:44:20.465699 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:20.513902 kubelet[1684]: E0515 23:44:20.513830 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:20.517597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:20.517961 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:44:30.572989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 15 23:44:30.580552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:44:30.732475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:44:30.734169 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:30.775820 kubelet[1700]: E0515 23:44:30.775677 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:30.778839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:30.779009 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:44:40.822765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 15 23:44:40.839608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:44:40.963316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:44:40.968805 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:41.010829 kubelet[1715]: E0515 23:44:41.010780 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:41.014837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:41.015065 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:44:51.072957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 15 23:44:51.081537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 23:44:51.221481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:44:51.223425 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:44:51.265081 kubelet[1730]: E0515 23:44:51.264998 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:44:51.269073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:44:51.269269 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:45:01.322766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 15 23:45:01.330643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:45:01.469465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:45:01.483947 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:45:01.528292 kubelet[1745]: E0515 23:45:01.528177 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:45:01.531905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:45:01.532322 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
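Editor's aside (not part of the journal): the lines above show kubelet in a crash loop — every start fails with `open /var/lib/kubelet/config.yaml: no such file or directory` (that file is normally written by `kubeadm init`/`kubeadm join`), and systemd's restart counter keeps climbing. A minimal, illustrative Python sketch for extracting those restart counters from journal text like this; the sample lines are copied verbatim from the log itself:

```python
import re

# Sample journal lines copied from the log above: systemd reports the
# kubelet crash loop via a monotonically increasing restart counter.
journal = """\
May 15 23:44:51.072957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 15 23:45:01.322766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 15 23:45:01.531905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
"""

# Match "Scheduled restart job" messages and capture (unit, counter).
pattern = re.compile(
    r"(\S+\.service): Scheduled restart job, restart counter is at (\d+)\."
)

def restart_counters(text):
    """Return (unit, counter) pairs found in journal text, in order."""
    return [(unit, int(n)) for unit, n in pattern.findall(text)]

print(restart_counters(journal))  # [('kubelet.service', 8), ('kubelet.service', 9)]
```

A steadily increasing counter like the one in this log (4, 5, 6, ... 12) is the signature of systemd's `Restart=` policy retrying a unit that fails identically every time.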
May 15 23:45:11.079370 systemd[1]: Started sshd@1-168.119.108.125:22-139.178.68.195:37704.service - OpenSSH per-connection server daemon (139.178.68.195:37704). May 15 23:45:11.572942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 15 23:45:11.581594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:45:11.720963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:45:11.726375 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:45:11.765891 kubelet[1763]: E0515 23:45:11.765814 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:45:11.769922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:45:11.770106 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:45:12.094693 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 37704 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:12.097799 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:12.108267 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:45:12.114548 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:45:12.118751 systemd-logind[1460]: New session 1 of user core. May 15 23:45:12.126714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:45:12.145753 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 15 23:45:12.150472 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:45:12.259729 systemd[1772]: Queued start job for default target default.target. May 15 23:45:12.272148 systemd[1772]: Created slice app.slice - User Application Slice. May 15 23:45:12.272816 systemd[1772]: Reached target paths.target - Paths. May 15 23:45:12.272845 systemd[1772]: Reached target timers.target - Timers. May 15 23:45:12.275091 systemd[1772]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:45:12.301536 systemd[1772]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:45:12.302040 systemd[1772]: Reached target sockets.target - Sockets. May 15 23:45:12.302289 systemd[1772]: Reached target basic.target - Basic System. May 15 23:45:12.302539 systemd[1772]: Reached target default.target - Main User Target. May 15 23:45:12.302689 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 23:45:12.303705 systemd[1772]: Startup finished in 145ms. May 15 23:45:12.310614 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:45:13.023500 systemd[1]: Started sshd@2-168.119.108.125:22-139.178.68.195:37712.service - OpenSSH per-connection server daemon (139.178.68.195:37712). May 15 23:45:14.021989 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 37712 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:14.024070 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:14.031491 systemd-logind[1460]: New session 2 of user core. May 15 23:45:14.038587 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:45:14.711605 sshd[1785]: Connection closed by 139.178.68.195 port 37712 May 15 23:45:14.712533 sshd-session[1783]: pam_unix(sshd:session): session closed for user core May 15 23:45:14.716350 systemd[1]: session-2.scope: Deactivated successfully. 
May 15 23:45:14.717736 systemd[1]: sshd@2-168.119.108.125:22-139.178.68.195:37712.service: Deactivated successfully. May 15 23:45:14.720566 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. May 15 23:45:14.721769 systemd-logind[1460]: Removed session 2. May 15 23:45:14.894714 systemd[1]: Started sshd@3-168.119.108.125:22-139.178.68.195:47704.service - OpenSSH per-connection server daemon (139.178.68.195:47704). May 15 23:45:15.905046 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 47704 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:15.907498 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:15.913397 systemd-logind[1460]: New session 3 of user core. May 15 23:45:15.919560 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 23:45:16.597201 sshd[1792]: Connection closed by 139.178.68.195 port 47704 May 15 23:45:16.596523 sshd-session[1790]: pam_unix(sshd:session): session closed for user core May 15 23:45:16.600713 systemd[1]: sshd@3-168.119.108.125:22-139.178.68.195:47704.service: Deactivated successfully. May 15 23:45:16.602961 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:45:16.604926 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. May 15 23:45:16.606377 systemd-logind[1460]: Removed session 3. May 15 23:45:16.779681 systemd[1]: Started sshd@4-168.119.108.125:22-139.178.68.195:47706.service - OpenSSH per-connection server daemon (139.178.68.195:47706). May 15 23:45:17.784135 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 47706 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:17.786373 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:17.791651 systemd-logind[1460]: New session 4 of user core. 
May 15 23:45:17.800535 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 23:45:18.482606 sshd[1799]: Connection closed by 139.178.68.195 port 47706 May 15 23:45:18.483652 sshd-session[1797]: pam_unix(sshd:session): session closed for user core May 15 23:45:18.488570 systemd[1]: sshd@4-168.119.108.125:22-139.178.68.195:47706.service: Deactivated successfully. May 15 23:45:18.491509 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:45:18.492951 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. May 15 23:45:18.494119 systemd-logind[1460]: Removed session 4. May 15 23:45:18.666636 systemd[1]: Started sshd@5-168.119.108.125:22-139.178.68.195:47716.service - OpenSSH per-connection server daemon (139.178.68.195:47716). May 15 23:45:19.659353 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 47716 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:19.661476 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:19.668614 systemd-logind[1460]: New session 5 of user core. May 15 23:45:19.678575 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:45:20.196178 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 23:45:20.196580 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:45:20.211622 sudo[1807]: pam_unix(sudo:session): session closed for user root May 15 23:45:20.373701 sshd[1806]: Connection closed by 139.178.68.195 port 47716 May 15 23:45:20.375117 sshd-session[1804]: pam_unix(sshd:session): session closed for user core May 15 23:45:20.380788 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. May 15 23:45:20.382017 systemd[1]: sshd@5-168.119.108.125:22-139.178.68.195:47716.service: Deactivated successfully. May 15 23:45:20.385264 systemd[1]: session-5.scope: Deactivated successfully. 
May 15 23:45:20.386570 systemd-logind[1460]: Removed session 5. May 15 23:45:20.561025 systemd[1]: Started sshd@6-168.119.108.125:22-139.178.68.195:47724.service - OpenSSH per-connection server daemon (139.178.68.195:47724). May 15 23:45:21.560632 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 47724 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:21.563371 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:21.568588 systemd-logind[1460]: New session 6 of user core. May 15 23:45:21.576611 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 23:45:21.822750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 15 23:45:21.832475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:45:21.973484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:45:21.975780 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:45:22.014772 kubelet[1823]: E0515 23:45:22.014730 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:45:22.017666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:45:22.017955 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 23:45:22.091564 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 23:45:22.092057 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:45:22.096167 sudo[1831]: pam_unix(sudo:session): session closed for user root May 15 23:45:22.102905 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 23:45:22.103236 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:45:22.122840 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:45:22.155831 augenrules[1853]: No rules May 15 23:45:22.158495 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:45:22.158694 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:45:22.160514 sudo[1830]: pam_unix(sudo:session): session closed for user root May 15 23:45:22.322340 sshd[1814]: Connection closed by 139.178.68.195 port 47724 May 15 23:45:22.323162 sshd-session[1812]: pam_unix(sshd:session): session closed for user core May 15 23:45:22.328872 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. May 15 23:45:22.329903 systemd[1]: sshd@6-168.119.108.125:22-139.178.68.195:47724.service: Deactivated successfully. May 15 23:45:22.331942 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:45:22.333117 systemd-logind[1460]: Removed session 6. May 15 23:45:22.499767 systemd[1]: Started sshd@7-168.119.108.125:22-139.178.68.195:47734.service - OpenSSH per-connection server daemon (139.178.68.195:47734). 
May 15 23:45:23.501754 sshd[1861]: Accepted publickey for core from 139.178.68.195 port 47734 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:45:23.504245 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:45:23.511295 systemd-logind[1460]: New session 7 of user core. May 15 23:45:23.520652 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:45:24.030604 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:45:24.030916 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:45:24.341619 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:45:24.341662 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:45:24.578159 dockerd[1882]: time="2025-05-15T23:45:24.578070111Z" level=info msg="Starting up" May 15 23:45:24.660078 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport489367511-merged.mount: Deactivated successfully. May 15 23:45:24.683907 systemd[1]: var-lib-docker-metacopy\x2dcheck1995742385-merged.mount: Deactivated successfully. May 15 23:45:24.694199 dockerd[1882]: time="2025-05-15T23:45:24.693884968Z" level=info msg="Loading containers: start." May 15 23:45:24.848223 kernel: Initializing XFRM netlink socket May 15 23:45:24.933942 systemd-networkd[1382]: docker0: Link UP May 15 23:45:24.976753 dockerd[1882]: time="2025-05-15T23:45:24.976613325Z" level=info msg="Loading containers: done." 
May 15 23:45:24.993689 dockerd[1882]: time="2025-05-15T23:45:24.993216818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:45:24.993689 dockerd[1882]: time="2025-05-15T23:45:24.993328789Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 15 23:45:24.993689 dockerd[1882]: time="2025-05-15T23:45:24.993452201Z" level=info msg="Daemon has completed initialization" May 15 23:45:25.029289 dockerd[1882]: time="2025-05-15T23:45:25.029226797Z" level=info msg="API listen on /run/docker.sock" May 15 23:45:25.029642 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:45:26.055869 containerd[1485]: time="2025-05-15T23:45:26.055827195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 15 23:45:26.694837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149804573.mount: Deactivated successfully. 
May 15 23:45:27.629426 containerd[1485]: time="2025-05-15T23:45:27.629231498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:27.631120 containerd[1485]: time="2025-05-15T23:45:27.631022861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349442"
May 15 23:45:27.631867 containerd[1485]: time="2025-05-15T23:45:27.631486104Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:27.635328 containerd[1485]: time="2025-05-15T23:45:27.635255248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:27.636690 containerd[1485]: time="2025-05-15T23:45:27.636642934Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 1.580694929s"
May 15 23:45:27.636803 containerd[1485]: time="2025-05-15T23:45:27.636787308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 15 23:45:27.638898 containerd[1485]: time="2025-05-15T23:45:27.638605234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 15 23:45:28.781056 containerd[1485]: time="2025-05-15T23:45:28.780965250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:28.782364 containerd[1485]: time="2025-05-15T23:45:28.782313612Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531755"
May 15 23:45:28.783132 containerd[1485]: time="2025-05-15T23:45:28.783064680Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:28.789723 containerd[1485]: time="2025-05-15T23:45:28.789602710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:28.791979 containerd[1485]: time="2025-05-15T23:45:28.791813070Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.153170553s"
May 15 23:45:28.791979 containerd[1485]: time="2025-05-15T23:45:28.791864514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 15 23:45:28.792675 containerd[1485]: time="2025-05-15T23:45:28.792629023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 15 23:45:29.788346 containerd[1485]: time="2025-05-15T23:45:29.788276432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:29.789536 containerd[1485]: time="2025-05-15T23:45:29.789477179Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293751"
May 15 23:45:29.790472 containerd[1485]: time="2025-05-15T23:45:29.790365539Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:29.793688 containerd[1485]: time="2025-05-15T23:45:29.793636991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:29.795077 containerd[1485]: time="2025-05-15T23:45:29.794927947Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.002156431s"
May 15 23:45:29.795077 containerd[1485]: time="2025-05-15T23:45:29.794973191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 15 23:45:29.796936 containerd[1485]: time="2025-05-15T23:45:29.796900883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 15 23:45:30.724136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790579743.mount: Deactivated successfully.
May 15 23:45:31.053388 containerd[1485]: time="2025-05-15T23:45:31.053148284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:31.055223 containerd[1485]: time="2025-05-15T23:45:31.055003046Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196030" May 15 23:45:31.056491 containerd[1485]: time="2025-05-15T23:45:31.056424571Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:31.062758 containerd[1485]: time="2025-05-15T23:45:31.062644516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:31.064300 containerd[1485]: time="2025-05-15T23:45:31.064035998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.267085511s" May 15 23:45:31.064300 containerd[1485]: time="2025-05-15T23:45:31.064096963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 15 23:45:31.065559 containerd[1485]: time="2025-05-15T23:45:31.065148776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 15 23:45:31.668635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123273312.mount: Deactivated successfully. 
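Editor's aside (not part of the journal): containerd reports each completed pull with the image size in bytes and the elapsed time, so approximate pull throughput can be read straight off these lines. An illustrative sketch using the kube-proxy entry above (size "28195023" in 1.267085511s); the truncated message text in `line` is an abbreviation for this example — only the size and duration fields matter:

```python
import re

# A containerd "Pulled image" message, abbreviated from the log above.
line = ('Pulled image "registry.k8s.io/kube-proxy:v1.33.1" ... '
        'size "28195023" in 1.267085511s')

def pull_throughput(msg):
    """Extract size (bytes) and duration (seconds); return bytes/sec."""
    size = int(re.search(r'size "(\d+)"', msg).group(1))
    secs = float(re.search(r'in ([\d.]+)s', msg).group(1))
    return size / secs

rate = pull_throughput(line)
print(f"{rate / 1e6:.1f} MB/s")  # roughly 22 MB/s for this pull
```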
May 15 23:45:32.073604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 15 23:45:32.081486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:45:32.215400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:45:32.223549 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:45:32.264055 kubelet[2200]: E0515 23:45:32.263904 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:45:32.266934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:45:32.267101 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 23:45:32.570777 containerd[1485]: time="2025-05-15T23:45:32.570561822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:32.572247 containerd[1485]: time="2025-05-15T23:45:32.571688320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" May 15 23:45:32.573437 containerd[1485]: time="2025-05-15T23:45:32.573378226Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:32.578201 containerd[1485]: time="2025-05-15T23:45:32.578137440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:45:32.579911 containerd[1485]: time="2025-05-15T23:45:32.579738739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.514512997s" May 15 23:45:32.579911 containerd[1485]: time="2025-05-15T23:45:32.579785663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 15 23:45:32.580545 containerd[1485]: time="2025-05-15T23:45:32.580304868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:45:33.107380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194325206.mount: Deactivated successfully. 
May 15 23:45:33.115169 containerd[1485]: time="2025-05-15T23:45:33.115097999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:33.116107 containerd[1485]: time="2025-05-15T23:45:33.116059001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
May 15 23:45:33.116937 containerd[1485]: time="2025-05-15T23:45:33.116864191Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:33.119863 containerd[1485]: time="2025-05-15T23:45:33.119756920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:33.120950 containerd[1485]: time="2025-05-15T23:45:33.120795809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.456218ms"
May 15 23:45:33.120950 containerd[1485]: time="2025-05-15T23:45:33.120839173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 23:45:33.122159 containerd[1485]: time="2025-05-15T23:45:33.121832899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 15 23:45:35.020520 containerd[1485]: time="2025-05-15T23:45:35.020446631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:35.022522 containerd[1485]: time="2025-05-15T23:45:35.022482824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230195"
May 15 23:45:35.023913 containerd[1485]: time="2025-05-15T23:45:35.023853791Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:35.028480 containerd[1485]: time="2025-05-15T23:45:35.028399018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:45:35.030090 containerd[1485]: time="2025-05-15T23:45:35.029915174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.908042074s"
May 15 23:45:35.030090 containerd[1485]: time="2025-05-15T23:45:35.029960650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 15 23:45:40.780680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:45:40.796683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:45:40.830529 systemd[1]: Reloading requested from client PID 2251 ('systemctl') (unit session-7.scope)...
May 15 23:45:40.830684 systemd[1]: Reloading...
May 15 23:45:40.959213 zram_generator::config[2294]: No configuration found.
May 15 23:45:41.056413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:45:41.137031 systemd[1]: Reloading finished in 305 ms.
May 15 23:45:41.192467 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 23:45:41.192608 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 23:45:41.193045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:45:41.199734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:45:41.328512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:45:41.343745 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:45:41.387411 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:45:41.387411 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 23:45:41.387411 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:45:41.387779 kubelet[2339]: I0515 23:45:41.387483    2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:45:42.044650 kubelet[2339]: I0515 23:45:42.044582    2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 15 23:45:42.044650 kubelet[2339]: I0515 23:45:42.044620    2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:45:42.045007 kubelet[2339]: I0515 23:45:42.044967    2339 server.go:956] "Client rotation is on, will bootstrap in background"
May 15 23:45:42.074768 kubelet[2339]: E0515 23:45:42.074229    2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.108.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 15 23:45:42.075683 kubelet[2339]: I0515 23:45:42.075658    2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:45:42.084320 kubelet[2339]: E0515 23:45:42.084243    2339 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:45:42.084320 kubelet[2339]: I0515 23:45:42.084304    2339 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:45:42.087351 kubelet[2339]: I0515 23:45:42.087294    2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:45:42.088909 kubelet[2339]: I0515 23:45:42.088842    2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:45:42.089089 kubelet[2339]: I0515 23:45:42.088898    2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-32b6392e63","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 23:45:42.089280 kubelet[2339]: I0515 23:45:42.089149    2339 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:45:42.089280 kubelet[2339]: I0515 23:45:42.089161    2339 container_manager_linux.go:303] "Creating device plugin manager"
May 15 23:45:42.089439 kubelet[2339]: I0515 23:45:42.089388    2339 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:45:42.093379 kubelet[2339]: I0515 23:45:42.093321    2339 kubelet.go:480] "Attempting to sync node with API server"
May 15 23:45:42.093379 kubelet[2339]: I0515 23:45:42.093353    2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:45:42.093379 kubelet[2339]: I0515 23:45:42.093384    2339 kubelet.go:386] "Adding apiserver pod source"
May 15 23:45:42.094430 kubelet[2339]: I0515 23:45:42.093400    2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:45:42.099212 kubelet[2339]: E0515 23:45:42.099142    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.108.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 15 23:45:42.100701 kubelet[2339]: E0515 23:45:42.099800    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.108.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-32b6392e63&limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 15 23:45:42.100701 kubelet[2339]: I0515 23:45:42.099925    2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:45:42.100701 kubelet[2339]: I0515 23:45:42.100672    2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 15 23:45:42.100846 kubelet[2339]: W0515 23:45:42.100790    2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 23:45:42.104700 kubelet[2339]: I0515 23:45:42.104472    2339 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 23:45:42.104700 kubelet[2339]: I0515 23:45:42.104515    2339 server.go:1289] "Started kubelet"
May 15 23:45:42.105029 kubelet[2339]: I0515 23:45:42.104999    2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:45:42.109216 kubelet[2339]: I0515 23:45:42.108664    2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:45:42.109216 kubelet[2339]: I0515 23:45:42.109045    2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:45:42.110633 kubelet[2339]: I0515 23:45:42.110605    2339 server.go:317] "Adding debug handlers to kubelet server"
May 15 23:45:42.115198 kubelet[2339]: I0515 23:45:42.115153    2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:45:42.115558 kubelet[2339]: E0515 23:45:42.114004    2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.108.125:6443/api/v1/namespaces/default/events\": dial tcp 168.119.108.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-n-32b6392e63.183fd80fa2715ecf  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-n-32b6392e63,UID:ci-4152-2-3-n-32b6392e63,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-32b6392e63,},FirstTimestamp:2025-05-15 23:45:42.104489679 +0000 UTC m=+0.755613431,LastTimestamp:2025-05-15 23:45:42.104489679 +0000 UTC m=+0.755613431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-32b6392e63,}"
May 15 23:45:42.116930 kubelet[2339]: I0515 23:45:42.116878    2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:45:42.119611 kubelet[2339]: E0515 23:45:42.119572    2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4152-2-3-n-32b6392e63\" not found"
May 15 23:45:42.119693 kubelet[2339]: I0515 23:45:42.119650    2339 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 23:45:42.119858 kubelet[2339]: I0515 23:45:42.119832    2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 15 23:45:42.119910 kubelet[2339]: I0515 23:45:42.119891    2339 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:45:42.120642 kubelet[2339]: E0515 23:45:42.120601    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.108.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 15 23:45:42.120838 kubelet[2339]: I0515 23:45:42.120811    2339 factory.go:223] Registration of the systemd container factory successfully
May 15 23:45:42.120910 kubelet[2339]: I0515 23:45:42.120891    2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:45:42.121627 kubelet[2339]: E0515 23:45:42.121591    2339 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:45:42.123063 kubelet[2339]: I0515 23:45:42.122957    2339 factory.go:223] Registration of the containerd container factory successfully
May 15 23:45:42.141585 kubelet[2339]: I0515 23:45:42.141376    2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 15 23:45:42.142780 kubelet[2339]: I0515 23:45:42.142642    2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 15 23:45:42.142780 kubelet[2339]: I0515 23:45:42.142668    2339 status_manager.go:230] "Starting to sync pod status with apiserver"
May 15 23:45:42.142780 kubelet[2339]: I0515 23:45:42.142705    2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 23:45:42.142780 kubelet[2339]: I0515 23:45:42.142712    2339 kubelet.go:2436] "Starting kubelet main sync loop"
May 15 23:45:42.142780 kubelet[2339]: E0515 23:45:42.142755    2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:45:42.149011 kubelet[2339]: E0515 23:45:42.147693    2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.108.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-32b6392e63?timeout=10s\": dial tcp 168.119.108.125:6443: connect: connection refused" interval="200ms"
May 15 23:45:42.149011 kubelet[2339]: E0515 23:45:42.147991    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.108.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 15 23:45:42.154049 kubelet[2339]: I0515 23:45:42.154022    2339 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 23:45:42.154275 kubelet[2339]: I0515 23:45:42.154258    2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 23:45:42.154371 kubelet[2339]: I0515 23:45:42.154355    2339 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:45:42.156781 kubelet[2339]: I0515 23:45:42.156713    2339 policy_none.go:49] "None policy: Start"
May 15 23:45:42.156781 kubelet[2339]: I0515 23:45:42.156742    2339 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 23:45:42.156781 kubelet[2339]: I0515 23:45:42.156755    2339 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:45:42.162424 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 23:45:42.175611 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 23:45:42.181830 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 23:45:42.195217 kubelet[2339]: E0515 23:45:42.194494    2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 15 23:45:42.195217 kubelet[2339]: I0515 23:45:42.194788    2339 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:45:42.195217 kubelet[2339]: I0515 23:45:42.194805    2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:45:42.196401 kubelet[2339]: I0515 23:45:42.196373    2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:45:42.197980 kubelet[2339]: E0515 23:45:42.197959    2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 23:45:42.198249 kubelet[2339]: E0515 23:45:42.198217    2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-n-32b6392e63\" not found"
May 15 23:45:42.257586 systemd[1]: Created slice kubepods-burstable-pod25310ed763ce8c54e43aa2cd3eae5ae6.slice - libcontainer container kubepods-burstable-pod25310ed763ce8c54e43aa2cd3eae5ae6.slice.
May 15 23:45:42.268649 kubelet[2339]: E0515 23:45:42.268586    2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.276319 systemd[1]: Created slice kubepods-burstable-pode416a745011ad96e8f8201026235d26a.slice - libcontainer container kubepods-burstable-pode416a745011ad96e8f8201026235d26a.slice.
May 15 23:45:42.279067 kubelet[2339]: E0515 23:45:42.279030    2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.280942 systemd[1]: Created slice kubepods-burstable-pod1973bad191ba03b759be2e7e80c247f7.slice - libcontainer container kubepods-burstable-pod1973bad191ba03b759be2e7e80c247f7.slice.
May 15 23:45:42.283307 kubelet[2339]: E0515 23:45:42.283280    2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.297844 kubelet[2339]: I0515 23:45:42.297705    2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.298996 kubelet[2339]: E0515 23:45:42.298934    2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.108.125:6443/api/v1/nodes\": dial tcp 168.119.108.125:6443: connect: connection refused" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.320491 kubelet[2339]: I0515 23:45:42.320375    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321053 kubelet[2339]: I0515 23:45:42.320690    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321053 kubelet[2339]: I0515 23:45:42.320788    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321053 kubelet[2339]: I0515 23:45:42.320831    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1973bad191ba03b759be2e7e80c247f7-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-32b6392e63\" (UID: \"1973bad191ba03b759be2e7e80c247f7\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321053 kubelet[2339]: I0515 23:45:42.320863    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321053 kubelet[2339]: I0515 23:45:42.320890    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321330 kubelet[2339]: I0515 23:45:42.320936    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321330 kubelet[2339]: I0515 23:45:42.320962    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.321330 kubelet[2339]: I0515 23:45:42.320986    2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.350309 kubelet[2339]: E0515 23:45:42.350233    2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.108.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-32b6392e63?timeout=10s\": dial tcp 168.119.108.125:6443: connect: connection refused" interval="400ms"
May 15 23:45:42.501382 kubelet[2339]: I0515 23:45:42.501322    2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.501821 kubelet[2339]: E0515 23:45:42.501765    2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.108.125:6443/api/v1/nodes\": dial tcp 168.119.108.125:6443: connect: connection refused" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.572027 containerd[1485]: time="2025-05-15T23:45:42.570950288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-32b6392e63,Uid:25310ed763ce8c54e43aa2cd3eae5ae6,Namespace:kube-system,Attempt:0,}"
May 15 23:45:42.581560 containerd[1485]: time="2025-05-15T23:45:42.581441040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-32b6392e63,Uid:e416a745011ad96e8f8201026235d26a,Namespace:kube-system,Attempt:0,}"
May 15 23:45:42.584502 containerd[1485]: time="2025-05-15T23:45:42.584444557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-32b6392e63,Uid:1973bad191ba03b759be2e7e80c247f7,Namespace:kube-system,Attempt:0,}"
May 15 23:45:42.751712 kubelet[2339]: E0515 23:45:42.751653    2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.108.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-n-32b6392e63?timeout=10s\": dial tcp 168.119.108.125:6443: connect: connection refused" interval="800ms"
May 15 23:45:42.904535 kubelet[2339]: I0515 23:45:42.904376    2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:42.904998 kubelet[2339]: E0515 23:45:42.904858    2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.108.125:6443/api/v1/nodes\": dial tcp 168.119.108.125:6443: connect: connection refused" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:43.014033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95154440.mount: Deactivated successfully.
May 15 23:45:43.021734 containerd[1485]: time="2025-05-15T23:45:43.021615624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:45:43.023767 containerd[1485]: time="2025-05-15T23:45:43.023704839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
May 15 23:45:43.027666 containerd[1485]: time="2025-05-15T23:45:43.027574442Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:45:43.029612 containerd[1485]: time="2025-05-15T23:45:43.029477106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 23:45:43.031847 containerd[1485]: time="2025-05-15T23:45:43.031805108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:45:43.032465 containerd[1485]: time="2025-05-15T23:45:43.032364760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 23:45:43.033065 containerd[1485]: time="2025-05-15T23:45:43.032602188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:45:43.036480 containerd[1485]: time="2025-05-15T23:45:43.036438033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:45:43.037424 containerd[1485]: time="2025-05-15T23:45:43.037393985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 452.850953ms"
May 15 23:45:43.039008 containerd[1485]: time="2025-05-15T23:45:43.038971585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 467.917022ms"
May 15 23:45:43.039716 containerd[1485]: time="2025-05-15T23:45:43.039431042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.882648ms"
May 15 23:45:43.169695 containerd[1485]: time="2025-05-15T23:45:43.169347817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:45:43.169695 containerd[1485]: time="2025-05-15T23:45:43.169422934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:45:43.169695 containerd[1485]: time="2025-05-15T23:45:43.169444332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.169695 containerd[1485]: time="2025-05-15T23:45:43.169522448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.172562 containerd[1485]: time="2025-05-15T23:45:43.172242911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:45:43.172562 containerd[1485]: time="2025-05-15T23:45:43.172315867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:45:43.172562 containerd[1485]: time="2025-05-15T23:45:43.172388943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.172769 containerd[1485]: time="2025-05-15T23:45:43.172544855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:45:43.172769 containerd[1485]: time="2025-05-15T23:45:43.172605852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:45:43.172769 containerd[1485]: time="2025-05-15T23:45:43.172621491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.172981 containerd[1485]: time="2025-05-15T23:45:43.172919596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.176113 containerd[1485]: time="2025-05-15T23:45:43.172818881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:45:43.207418 systemd[1]: Started cri-containerd-481c4b32aafb8ad5c8b17e392ad53de16e45f20d8ca02dc4bfe0fa02b2dff2bc.scope - libcontainer container 481c4b32aafb8ad5c8b17e392ad53de16e45f20d8ca02dc4bfe0fa02b2dff2bc.
May 15 23:45:43.208665 systemd[1]: Started cri-containerd-d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958.scope - libcontainer container d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958.
May 15 23:45:43.215510 systemd[1]: Started cri-containerd-37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a.scope - libcontainer container 37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a.
May 15 23:45:43.223689 kubelet[2339]: E0515 23:45:43.222980    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.108.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 15 23:45:43.271120 containerd[1485]: time="2025-05-15T23:45:43.271076822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-n-32b6392e63,Uid:e416a745011ad96e8f8201026235d26a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958\""
May 15 23:45:43.281347 containerd[1485]: time="2025-05-15T23:45:43.280829127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-n-32b6392e63,Uid:25310ed763ce8c54e43aa2cd3eae5ae6,Namespace:kube-system,Attempt:0,} returns sandbox id \"481c4b32aafb8ad5c8b17e392ad53de16e45f20d8ca02dc4bfe0fa02b2dff2bc\""
May 15 23:45:43.283691 containerd[1485]: time="2025-05-15T23:45:43.283475273Z" level=info msg="CreateContainer within sandbox \"d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 23:45:43.288836 containerd[1485]: time="2025-05-15T23:45:43.288794284Z" level=info msg="CreateContainer within sandbox \"481c4b32aafb8ad5c8b17e392ad53de16e45f20d8ca02dc4bfe0fa02b2dff2bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 23:45:43.294678 containerd[1485]: time="2025-05-15T23:45:43.294635388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-n-32b6392e63,Uid:1973bad191ba03b759be2e7e80c247f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a\""
May 15 23:45:43.302915 containerd[1485]: time="2025-05-15T23:45:43.302850731Z" level=info msg="CreateContainer within sandbox \"37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 23:45:43.307978 containerd[1485]: time="2025-05-15T23:45:43.307592171Z" level=info msg="CreateContainer within sandbox \"d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92\""
May 15 23:45:43.310349 containerd[1485]: time="2025-05-15T23:45:43.309549192Z" level=info msg="StartContainer for \"d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92\""
May 15 23:45:43.314568 containerd[1485]: time="2025-05-15T23:45:43.313928010Z" level=info msg="CreateContainer within sandbox \"481c4b32aafb8ad5c8b17e392ad53de16e45f20d8ca02dc4bfe0fa02b2dff2bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51b7d6c06992a5780520a9efe6f17c433a487aeeea1160093f1f281c41b981d9\""
May 15 23:45:43.315402 containerd[1485]: time="2025-05-15T23:45:43.315362297Z" level=info msg="StartContainer for \"51b7d6c06992a5780520a9efe6f17c433a487aeeea1160093f1f281c41b981d9\""
May 15 23:45:43.330600 containerd[1485]: time="2025-05-15T23:45:43.330359057Z" level=info msg="CreateContainer within sandbox \"37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7\""
May 15 23:45:43.331727 containerd[1485]: time="2025-05-15T23:45:43.331681670Z" level=info msg="StartContainer for \"45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7\""
May 15 23:45:43.350461 systemd[1]: Started cri-containerd-d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92.scope - libcontainer container d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92.
May 15 23:45:43.351395 kubelet[2339]: E0515 23:45:43.351169    2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.108.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-n-32b6392e63&limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 15 23:45:43.361764 systemd[1]: Started cri-containerd-51b7d6c06992a5780520a9efe6f17c433a487aeeea1160093f1f281c41b981d9.scope - libcontainer container 51b7d6c06992a5780520a9efe6f17c433a487aeeea1160093f1f281c41b981d9.
May 15 23:45:43.383702 systemd[1]: Started cri-containerd-45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7.scope - libcontainer container 45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7.
May 15 23:45:43.410059 kubelet[2339]: E0515 23:45:43.409274 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.108.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.108.125:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 15 23:45:43.439309 containerd[1485]: time="2025-05-15T23:45:43.437978203Z" level=info msg="StartContainer for \"45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7\" returns successfully"
May 15 23:45:43.439309 containerd[1485]: time="2025-05-15T23:45:43.438117076Z" level=info msg="StartContainer for \"51b7d6c06992a5780520a9efe6f17c433a487aeeea1160093f1f281c41b981d9\" returns successfully"
May 15 23:45:43.447250 containerd[1485]: time="2025-05-15T23:45:43.447137899Z" level=info msg="StartContainer for \"d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92\" returns successfully"
May 15 23:45:43.706990 kubelet[2339]: I0515 23:45:43.706872 2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:44.162673 kubelet[2339]: E0515 23:45:44.162604 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:44.166021 kubelet[2339]: E0515 23:45:44.165744 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:44.168352 kubelet[2339]: E0515 23:45:44.168321 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:45.171061 kubelet[2339]: E0515 23:45:45.170861 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:45.171061 kubelet[2339]: E0515 23:45:45.170997 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.467247 kubelet[2339]: E0515 23:45:46.467204 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-3-n-32b6392e63\" not found" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.603850 kubelet[2339]: I0515 23:45:46.603795 2339 kubelet_node_status.go:78] "Successfully registered node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.603850 kubelet[2339]: E0515 23:45:46.603849 2339 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4152-2-3-n-32b6392e63\": node \"ci-4152-2-3-n-32b6392e63\" not found"
May 15 23:45:46.622950 kubelet[2339]: I0515 23:45:46.622916 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.642104 kubelet[2339]: E0515 23:45:46.642066 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.642104 kubelet[2339]: I0515 23:45:46.642098 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.646632 kubelet[2339]: E0515 23:45:46.646596 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.646632 kubelet[2339]: I0515 23:45:46.646631 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:46.650215 kubelet[2339]: E0515 23:45:46.650171 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-3-n-32b6392e63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:47.096852 kubelet[2339]: I0515 23:45:47.096788 2339 apiserver.go:52] "Watching apiserver"
May 15 23:45:47.121099 kubelet[2339]: I0515 23:45:47.121045 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 15 23:45:47.790686 kubelet[2339]: I0515 23:45:47.790644 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:47.802695 kubelet[2339]: I0515 23:45:47.802643 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:48.794397 systemd[1]: Reloading requested from client PID 2634 ('systemctl') (unit session-7.scope)...
May 15 23:45:48.794425 systemd[1]: Reloading...
May 15 23:45:48.910290 zram_generator::config[2674]: No configuration found.
May 15 23:45:49.017917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:45:49.117159 systemd[1]: Reloading finished in 322 ms.
May 15 23:45:49.160813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:45:49.180914 systemd[1]: kubelet.service: Deactivated successfully.
May 15 23:45:49.181376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:45:49.181463 systemd[1]: kubelet.service: Consumed 1.205s CPU time, 127.4M memory peak, 0B memory swap peak.
May 15 23:45:49.188526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:45:49.320837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:45:49.328656 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:45:49.378759 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:45:49.378759 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 23:45:49.378759 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:45:49.378759 kubelet[2719]: I0515 23:45:49.378416 2719 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:45:49.390878 kubelet[2719]: I0515 23:45:49.390829 2719 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 15 23:45:49.390878 kubelet[2719]: I0515 23:45:49.390862 2719 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:45:49.391274 kubelet[2719]: I0515 23:45:49.391115 2719 server.go:956] "Client rotation is on, will bootstrap in background"
May 15 23:45:49.393347 kubelet[2719]: I0515 23:45:49.392712 2719 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 15 23:45:49.395198 kubelet[2719]: I0515 23:45:49.395130 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:45:49.401039 kubelet[2719]: E0515 23:45:49.399972 2719 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:45:49.401039 kubelet[2719]: I0515 23:45:49.400007 2719 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:45:49.402700 kubelet[2719]: I0515 23:45:49.402632 2719 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:45:49.402902 kubelet[2719]: I0515 23:45:49.402868 2719 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:45:49.403265 kubelet[2719]: I0515 23:45:49.402904 2719 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-n-32b6392e63","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 23:45:49.403265 kubelet[2719]: I0515 23:45:49.403168 2719 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:45:49.403265 kubelet[2719]: I0515 23:45:49.403179 2719 container_manager_linux.go:303] "Creating device plugin manager"
May 15 23:45:49.403265 kubelet[2719]: I0515 23:45:49.403247 2719 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:45:49.403463 kubelet[2719]: I0515 23:45:49.403441 2719 kubelet.go:480] "Attempting to sync node with API server"
May 15 23:45:49.403463 kubelet[2719]: I0515 23:45:49.403459 2719 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:45:49.403507 kubelet[2719]: I0515 23:45:49.403484 2719 kubelet.go:386] "Adding apiserver pod source"
May 15 23:45:49.403507 kubelet[2719]: I0515 23:45:49.403502 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:45:49.413132 kubelet[2719]: I0515 23:45:49.413060 2719 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:45:49.416284 kubelet[2719]: I0515 23:45:49.416232 2719 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 15 23:45:49.423545 kubelet[2719]: I0515 23:45:49.423517 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 23:45:49.423807 kubelet[2719]: I0515 23:45:49.423792 2719 server.go:1289] "Started kubelet"
May 15 23:45:49.425864 kubelet[2719]: I0515 23:45:49.425838 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:45:49.436731 kubelet[2719]: I0515 23:45:49.436662 2719 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:45:49.437975 kubelet[2719]: I0515 23:45:49.437675 2719 server.go:317] "Adding debug handlers to kubelet server"
May 15 23:45:49.439501 kubelet[2719]: I0515 23:45:49.439475 2719 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 23:45:49.442128 kubelet[2719]: I0515 23:45:49.441947 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:45:49.442308 kubelet[2719]: I0515 23:45:49.442168 2719 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:45:49.443062 kubelet[2719]: I0515 23:45:49.442450 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:45:49.443062 kubelet[2719]: I0515 23:45:49.442545 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 15 23:45:49.443062 kubelet[2719]: I0515 23:45:49.442665 2719 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:45:49.444777 kubelet[2719]: I0515 23:45:49.444709 2719 factory.go:223] Registration of the systemd container factory successfully
May 15 23:45:49.444861 kubelet[2719]: I0515 23:45:49.444840 2719 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:45:49.447579 kubelet[2719]: I0515 23:45:49.447047 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 15 23:45:49.448133 kubelet[2719]: I0515 23:45:49.448110 2719 factory.go:223] Registration of the containerd container factory successfully
May 15 23:45:49.449372 kubelet[2719]: I0515 23:45:49.449348 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 15 23:45:49.449478 kubelet[2719]: I0515 23:45:49.449468 2719 status_manager.go:230] "Starting to sync pod status with apiserver"
May 15 23:45:49.449572 kubelet[2719]: I0515 23:45:49.449559 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 23:45:49.449627 kubelet[2719]: I0515 23:45:49.449619 2719 kubelet.go:2436] "Starting kubelet main sync loop"
May 15 23:45:49.449747 kubelet[2719]: E0515 23:45:49.449707 2719 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:45:49.473322 kubelet[2719]: E0515 23:45:49.473280 2719 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:45:49.511597 kubelet[2719]: I0515 23:45:49.511560 2719 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.511841 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.511882 2719 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512028 2719 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512040 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512058 2719 policy_none.go:49] "None policy: Start"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512069 2719 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512078 2719 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:45:49.512794 kubelet[2719]: I0515 23:45:49.512162 2719 state_mem.go:75] "Updated machine memory state"
May 15 23:45:49.516645 kubelet[2719]: E0515 23:45:49.516617 2719 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 15 23:45:49.517086 kubelet[2719]: I0515 23:45:49.517000 2719 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:45:49.517086 kubelet[2719]: I0515 23:45:49.517022 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:45:49.517523 kubelet[2719]: I0515 23:45:49.517494 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:45:49.518965 kubelet[2719]: E0515 23:45:49.518944 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 23:45:49.551807 kubelet[2719]: I0515 23:45:49.551427 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.552096 kubelet[2719]: I0515 23:45:49.552063 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.553032 kubelet[2719]: I0515 23:45:49.552999 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.560257 kubelet[2719]: E0515 23:45:49.560133 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-3-n-32b6392e63\" already exists" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.561172 kubelet[2719]: E0515 23:45:49.561040 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.628346 kubelet[2719]: I0515 23:45:49.628311 2719 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.638141 kubelet[2719]: I0515 23:45:49.637910 2719 kubelet_node_status.go:124] "Node was previously registered" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.638141 kubelet[2719]: I0515 23:45:49.638004 2719 kubelet_node_status.go:78] "Successfully registered node" node="ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643653 kubelet[2719]: I0515 23:45:49.643341 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643653 kubelet[2719]: I0515 23:45:49.643376 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643653 kubelet[2719]: I0515 23:45:49.643397 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643653 kubelet[2719]: I0515 23:45:49.643414 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643653 kubelet[2719]: I0515 23:45:49.643437 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643916 kubelet[2719]: I0515 23:45:49.643452 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25310ed763ce8c54e43aa2cd3eae5ae6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" (UID: \"25310ed763ce8c54e43aa2cd3eae5ae6\") " pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643916 kubelet[2719]: I0515 23:45:49.643466 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643916 kubelet[2719]: I0515 23:45:49.643480 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e416a745011ad96e8f8201026235d26a-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" (UID: \"e416a745011ad96e8f8201026235d26a\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.643916 kubelet[2719]: I0515 23:45:49.643494 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1973bad191ba03b759be2e7e80c247f7-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-n-32b6392e63\" (UID: \"1973bad191ba03b759be2e7e80c247f7\") " pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:49.794462 sudo[2757]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 23:45:49.794787 sudo[2757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 15 23:45:50.250229 sudo[2757]: pam_unix(sudo:session): session closed for user root
May 15 23:45:50.405857 kubelet[2719]: I0515 23:45:50.405765 2719 apiserver.go:52] "Watching apiserver"
May 15 23:45:50.442994 kubelet[2719]: I0515 23:45:50.442887 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 15 23:45:50.489238 kubelet[2719]: I0515 23:45:50.488885 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.490741 kubelet[2719]: I0515 23:45:50.490639 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.492412 kubelet[2719]: I0515 23:45:50.492339 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.505361 kubelet[2719]: E0515 23:45:50.505053 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-3-n-32b6392e63\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.508218 kubelet[2719]: E0515 23:45:50.507820 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-3-n-32b6392e63\" already exists" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.509915 kubelet[2719]: E0515 23:45:50.509493 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152-2-3-n-32b6392e63\" already exists" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63"
May 15 23:45:50.552697 kubelet[2719]: I0515 23:45:50.552202 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-n-32b6392e63" podStartSLOduration=1.55217329 podStartE2EDuration="1.55217329s" podCreationTimestamp="2025-05-15 23:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:45:50.539937246 +0000 UTC m=+1.206068736" watchObservedRunningTime="2025-05-15 23:45:50.55217329 +0000 UTC m=+1.218304780"
May 15 23:45:50.563723 kubelet[2719]: I0515 23:45:50.562454 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-n-32b6392e63" podStartSLOduration=3.562431512 podStartE2EDuration="3.562431512s" podCreationTimestamp="2025-05-15 23:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:45:50.553364896 +0000 UTC m=+1.219496426" watchObservedRunningTime="2025-05-15 23:45:50.562431512 +0000 UTC m=+1.228563042"
May 15 23:45:50.564607 kubelet[2719]: I0515 23:45:50.564529 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-n-32b6392e63" podStartSLOduration=3.564509132 podStartE2EDuration="3.564509132s" podCreationTimestamp="2025-05-15 23:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:45:50.563661116 +0000 UTC m=+1.229792646" watchObservedRunningTime="2025-05-15 23:45:50.564509132 +0000 UTC m=+1.230640662"
May 15 23:45:52.345806 sudo[1864]: pam_unix(sudo:session): session closed for user root
May 15 23:45:52.507261 sshd[1863]: Connection closed by 139.178.68.195 port 47734
May 15 23:45:52.508099 sshd-session[1861]: pam_unix(sshd:session): session closed for user core
May 15 23:45:52.513699 systemd[1]: sshd@7-168.119.108.125:22-139.178.68.195:47734.service: Deactivated successfully.
May 15 23:45:52.515811 systemd[1]: session-7.scope: Deactivated successfully.
May 15 23:45:52.516037 systemd[1]: session-7.scope: Consumed 8.293s CPU time, 157.6M memory peak, 0B memory swap peak.
May 15 23:45:52.516855 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit.
May 15 23:45:52.518232 systemd-logind[1460]: Removed session 7.
May 15 23:45:55.391415 kubelet[2719]: I0515 23:45:55.391340 2719 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 23:45:55.393087 kubelet[2719]: I0515 23:45:55.392390 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 23:45:55.393133 containerd[1485]: time="2025-05-15T23:45:55.392114693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 23:45:56.415900 systemd[1]: Created slice kubepods-besteffort-podd462ba02_bff7_4da0_98ef_8e3431dc3124.slice - libcontainer container kubepods-besteffort-podd462ba02_bff7_4da0_98ef_8e3431dc3124.slice.
May 15 23:45:56.432075 systemd[1]: Created slice kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice - libcontainer container kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice.
May 15 23:45:56.494457 kubelet[2719]: I0515 23:45:56.494198 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-cgroup\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.494457 kubelet[2719]: I0515 23:45:56.494241 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-etc-cni-netd\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.494457 kubelet[2719]: I0515 23:45:56.494262 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-lib-modules\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.494934 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-config-path\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.494995 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-hubble-tls\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.495038 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d462ba02-bff7-4da0-98ef-8e3431dc3124-kube-proxy\") pod \"kube-proxy-6wqmz\" (UID: \"d462ba02-bff7-4da0-98ef-8e3431dc3124\") " pod="kube-system/kube-proxy-6wqmz"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.495059 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-hostproc\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.495081 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-xtables-lock\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496034 kubelet[2719]: I0515 23:45:56.495106 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-kernel\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495132 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d462ba02-bff7-4da0-98ef-8e3431dc3124-lib-modules\") pod \"kube-proxy-6wqmz\" (UID: \"d462ba02-bff7-4da0-98ef-8e3431dc3124\") " pod="kube-system/kube-proxy-6wqmz"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495153 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtc8\" (UniqueName: \"kubernetes.io/projected/d462ba02-bff7-4da0-98ef-8e3431dc3124-kube-api-access-rhtc8\") pod \"kube-proxy-6wqmz\" (UID: \"d462ba02-bff7-4da0-98ef-8e3431dc3124\") " pod="kube-system/kube-proxy-6wqmz"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495606 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cni-path\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495674 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-net\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495700 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-run\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496374 kubelet[2719]: I0515 23:45:56.495740 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-bpf-maps\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496585 kubelet[2719]: I0515 23:45:56.495763 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a33323a-c06b-4d74-96b7-b79400cedddf-clustermesh-secrets\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496585 kubelet[2719]: I0515 23:45:56.495786 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz978\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-kube-api-access-zz978\") pod \"cilium-thnx7\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " pod="kube-system/cilium-thnx7"
May 15 23:45:56.496585 kubelet[2719]: I0515 23:45:56.495863 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d462ba02-bff7-4da0-98ef-8e3431dc3124-xtables-lock\") pod \"kube-proxy-6wqmz\" (UID: \"d462ba02-bff7-4da0-98ef-8e3431dc3124\") " pod="kube-system/kube-proxy-6wqmz"
May 15 23:45:56.632079 systemd[1]: Created slice kubepods-besteffort-poda1e3c368_2bbd_49ab_89af_a59c5c0cff19.slice - libcontainer container kubepods-besteffort-poda1e3c368_2bbd_49ab_89af_a59c5c0cff19.slice.
May 15 23:45:56.698764 kubelet[2719]: I0515 23:45:56.698451 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vv29b\" (UID: \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\") " pod="kube-system/cilium-operator-6c4d7847fc-vv29b" May 15 23:45:56.698764 kubelet[2719]: I0515 23:45:56.698580 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbx6m\" (UniqueName: \"kubernetes.io/projected/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-kube-api-access-cbx6m\") pod \"cilium-operator-6c4d7847fc-vv29b\" (UID: \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\") " pod="kube-system/cilium-operator-6c4d7847fc-vv29b" May 15 23:45:56.729093 containerd[1485]: time="2025-05-15T23:45:56.728947638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wqmz,Uid:d462ba02-bff7-4da0-98ef-8e3431dc3124,Namespace:kube-system,Attempt:0,}" May 15 23:45:56.739223 containerd[1485]: time="2025-05-15T23:45:56.738869059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thnx7,Uid:6a33323a-c06b-4d74-96b7-b79400cedddf,Namespace:kube-system,Attempt:0,}" May 15 23:45:56.758253 containerd[1485]: time="2025-05-15T23:45:56.756591170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:45:56.758253 containerd[1485]: time="2025-05-15T23:45:56.756797847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:45:56.758253 containerd[1485]: time="2025-05-15T23:45:56.756876126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:56.758467 containerd[1485]: time="2025-05-15T23:45:56.758406985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:56.773073 containerd[1485]: time="2025-05-15T23:45:56.772484787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:45:56.773073 containerd[1485]: time="2025-05-15T23:45:56.772697584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:45:56.773073 containerd[1485]: time="2025-05-15T23:45:56.772712584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:56.773468 containerd[1485]: time="2025-05-15T23:45:56.772878061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:56.781423 systemd[1]: Started cri-containerd-d7c597ecab81c4b602dcca9bf92c301df451878eecbd500b2dbd0fc23fd19d77.scope - libcontainer container d7c597ecab81c4b602dcca9bf92c301df451878eecbd500b2dbd0fc23fd19d77. May 15 23:45:56.803588 systemd[1]: Started cri-containerd-5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f.scope - libcontainer container 5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f. 
May 15 23:45:56.825959 containerd[1485]: time="2025-05-15T23:45:56.825919517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wqmz,Uid:d462ba02-bff7-4da0-98ef-8e3431dc3124,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7c597ecab81c4b602dcca9bf92c301df451878eecbd500b2dbd0fc23fd19d77\"" May 15 23:45:56.834405 containerd[1485]: time="2025-05-15T23:45:56.834237160Z" level=info msg="CreateContainer within sandbox \"d7c597ecab81c4b602dcca9bf92c301df451878eecbd500b2dbd0fc23fd19d77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:45:56.847511 containerd[1485]: time="2025-05-15T23:45:56.847439215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thnx7,Uid:6a33323a-c06b-4d74-96b7-b79400cedddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\"" May 15 23:45:56.850245 containerd[1485]: time="2025-05-15T23:45:56.850116177Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:45:56.859219 containerd[1485]: time="2025-05-15T23:45:56.857481354Z" level=info msg="CreateContainer within sandbox \"d7c597ecab81c4b602dcca9bf92c301df451878eecbd500b2dbd0fc23fd19d77\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c529156d871fd72fdaa63d519b2ba4be2fe38a781b884b9ccc6da992ce021c6\"" May 15 23:45:56.863087 containerd[1485]: time="2025-05-15T23:45:56.863025716Z" level=info msg="StartContainer for \"5c529156d871fd72fdaa63d519b2ba4be2fe38a781b884b9ccc6da992ce021c6\"" May 15 23:45:56.893414 systemd[1]: Started cri-containerd-5c529156d871fd72fdaa63d519b2ba4be2fe38a781b884b9ccc6da992ce021c6.scope - libcontainer container 5c529156d871fd72fdaa63d519b2ba4be2fe38a781b884b9ccc6da992ce021c6. 
May 15 23:45:56.931969 containerd[1485]: time="2025-05-15T23:45:56.931711992Z" level=info msg="StartContainer for \"5c529156d871fd72fdaa63d519b2ba4be2fe38a781b884b9ccc6da992ce021c6\" returns successfully" May 15 23:45:56.938246 containerd[1485]: time="2025-05-15T23:45:56.937574350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vv29b,Uid:a1e3c368-2bbd-49ab-89af-a59c5c0cff19,Namespace:kube-system,Attempt:0,}" May 15 23:45:56.972562 containerd[1485]: time="2025-05-15T23:45:56.972384061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:45:56.973423 containerd[1485]: time="2025-05-15T23:45:56.973016892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:45:56.973423 containerd[1485]: time="2025-05-15T23:45:56.973076651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:56.973682 containerd[1485]: time="2025-05-15T23:45:56.973404207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:45:57.003415 systemd[1]: Started cri-containerd-be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462.scope - libcontainer container be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462. 
May 15 23:45:57.043927 containerd[1485]: time="2025-05-15T23:45:57.043864633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vv29b,Uid:a1e3c368-2bbd-49ab-89af-a59c5c0cff19,Namespace:kube-system,Attempt:0,} returns sandbox id \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\"" May 15 23:45:59.341904 kubelet[2719]: I0515 23:45:59.341785 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6wqmz" podStartSLOduration=3.341768434 podStartE2EDuration="3.341768434s" podCreationTimestamp="2025-05-15 23:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:45:57.519009428 +0000 UTC m=+8.185140958" watchObservedRunningTime="2025-05-15 23:45:59.341768434 +0000 UTC m=+10.007899924" May 15 23:46:01.310082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58859828.mount: Deactivated successfully. 
May 15 23:46:02.673464 containerd[1485]: time="2025-05-15T23:46:02.673385324Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:46:02.674856 containerd[1485]: time="2025-05-15T23:46:02.674800161Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 23:46:02.675684 containerd[1485]: time="2025-05-15T23:46:02.675413160Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:46:02.677967 containerd[1485]: time="2025-05-15T23:46:02.677269557Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.827057662s" May 15 23:46:02.677967 containerd[1485]: time="2025-05-15T23:46:02.677313077Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 23:46:02.681773 containerd[1485]: time="2025-05-15T23:46:02.681573511Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:46:02.685532 containerd[1485]: time="2025-05-15T23:46:02.685492504Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:46:02.701600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098038940.mount: Deactivated successfully. May 15 23:46:02.711806 containerd[1485]: time="2025-05-15T23:46:02.711737742Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\"" May 15 23:46:02.713017 containerd[1485]: time="2025-05-15T23:46:02.712903861Z" level=info msg="StartContainer for \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\"" May 15 23:46:02.749441 systemd[1]: Started cri-containerd-04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73.scope - libcontainer container 04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73. May 15 23:46:02.790210 containerd[1485]: time="2025-05-15T23:46:02.790037337Z" level=info msg="StartContainer for \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\" returns successfully" May 15 23:46:02.808481 systemd[1]: cri-containerd-04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73.scope: Deactivated successfully. 
May 15 23:46:02.951024 containerd[1485]: time="2025-05-15T23:46:02.950564881Z" level=info msg="shim disconnected" id=04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73 namespace=k8s.io May 15 23:46:02.951024 containerd[1485]: time="2025-05-15T23:46:02.950704760Z" level=warning msg="cleaning up after shim disconnected" id=04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73 namespace=k8s.io May 15 23:46:02.951024 containerd[1485]: time="2025-05-15T23:46:02.950716360Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:46:03.534255 containerd[1485]: time="2025-05-15T23:46:03.534113935Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:46:03.548289 containerd[1485]: time="2025-05-15T23:46:03.546085178Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\"" May 15 23:46:03.548829 containerd[1485]: time="2025-05-15T23:46:03.548773059Z" level=info msg="StartContainer for \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\"" May 15 23:46:03.587419 systemd[1]: Started cri-containerd-e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83.scope - libcontainer container e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83. May 15 23:46:03.619250 containerd[1485]: time="2025-05-15T23:46:03.618976276Z" level=info msg="StartContainer for \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\" returns successfully" May 15 23:46:03.633544 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:46:03.634442 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 15 23:46:03.634586 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:46:03.643407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:46:03.643685 systemd[1]: cri-containerd-e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83.scope: Deactivated successfully. May 15 23:46:03.665460 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:46:03.679119 containerd[1485]: time="2025-05-15T23:46:03.679037932Z" level=info msg="shim disconnected" id=e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83 namespace=k8s.io May 15 23:46:03.679119 containerd[1485]: time="2025-05-15T23:46:03.679105212Z" level=warning msg="cleaning up after shim disconnected" id=e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83 namespace=k8s.io May 15 23:46:03.679119 containerd[1485]: time="2025-05-15T23:46:03.679116652Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:46:03.691883 containerd[1485]: time="2025-05-15T23:46:03.691829015Z" level=warning msg="cleanup warnings time=\"2025-05-15T23:46:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 23:46:03.696995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73-rootfs.mount: Deactivated successfully. May 15 23:46:04.539576 containerd[1485]: time="2025-05-15T23:46:04.539080355Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:46:04.559458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587183072.mount: Deactivated successfully. 
May 15 23:46:04.564704 containerd[1485]: time="2025-05-15T23:46:04.564654207Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\"" May 15 23:46:04.566169 containerd[1485]: time="2025-05-15T23:46:04.566100290Z" level=info msg="StartContainer for \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\"" May 15 23:46:04.605769 systemd[1]: Started cri-containerd-86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4.scope - libcontainer container 86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4. May 15 23:46:04.637514 containerd[1485]: time="2025-05-15T23:46:04.637447516Z" level=info msg="StartContainer for \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\" returns successfully" May 15 23:46:04.643078 systemd[1]: cri-containerd-86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4.scope: Deactivated successfully. May 15 23:46:04.677893 containerd[1485]: time="2025-05-15T23:46:04.677792919Z" level=info msg="shim disconnected" id=86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4 namespace=k8s.io May 15 23:46:04.677893 containerd[1485]: time="2025-05-15T23:46:04.677854479Z" level=warning msg="cleaning up after shim disconnected" id=86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4 namespace=k8s.io May 15 23:46:04.677893 containerd[1485]: time="2025-05-15T23:46:04.677864599Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:46:04.697324 systemd[1]: run-containerd-runc-k8s.io-86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4-runc.g6Nx8N.mount: Deactivated successfully. 
May 15 23:46:04.697481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4-rootfs.mount: Deactivated successfully. May 15 23:46:04.816694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896692988.mount: Deactivated successfully. May 15 23:46:05.570220 containerd[1485]: time="2025-05-15T23:46:05.569630254Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:46:05.609151 containerd[1485]: time="2025-05-15T23:46:05.608475081Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\"" May 15 23:46:05.612262 containerd[1485]: time="2025-05-15T23:46:05.612171055Z" level=info msg="StartContainer for \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\"" May 15 23:46:05.649612 systemd[1]: Started cri-containerd-b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490.scope - libcontainer container b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490. May 15 23:46:05.680459 systemd[1]: cri-containerd-b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490.scope: Deactivated successfully. 
May 15 23:46:05.686628 containerd[1485]: time="2025-05-15T23:46:05.686162655Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice/cri-containerd-b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490.scope/memory.events\": no such file or directory" May 15 23:46:05.687535 containerd[1485]: time="2025-05-15T23:46:05.687498260Z" level=info msg="StartContainer for \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\" returns successfully" May 15 23:46:05.719164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490-rootfs.mount: Deactivated successfully. May 15 23:46:05.762179 containerd[1485]: time="2025-05-15T23:46:05.762080022Z" level=info msg="shim disconnected" id=b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490 namespace=k8s.io May 15 23:46:05.762179 containerd[1485]: time="2025-05-15T23:46:05.762173903Z" level=warning msg="cleaning up after shim disconnected" id=b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490 namespace=k8s.io May 15 23:46:05.762179 containerd[1485]: time="2025-05-15T23:46:05.762195663Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:46:05.792669 containerd[1485]: time="2025-05-15T23:46:05.792522818Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:46:05.793837 containerd[1485]: time="2025-05-15T23:46:05.793790422Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 23:46:05.795427 containerd[1485]: 
time="2025-05-15T23:46:05.795362348Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:46:05.798547 containerd[1485]: time="2025-05-15T23:46:05.798345120Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.116713329s" May 15 23:46:05.798547 containerd[1485]: time="2025-05-15T23:46:05.798404480Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 23:46:05.803721 containerd[1485]: time="2025-05-15T23:46:05.803665700Z" level=info msg="CreateContainer within sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:46:05.821493 containerd[1485]: time="2025-05-15T23:46:05.821262326Z" level=info msg="CreateContainer within sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\"" May 15 23:46:05.822994 containerd[1485]: time="2025-05-15T23:46:05.822863732Z" level=info msg="StartContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\"" May 15 23:46:05.857404 systemd[1]: Started cri-containerd-7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8.scope - libcontainer container 
7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8. May 15 23:46:05.890627 containerd[1485]: time="2025-05-15T23:46:05.890579509Z" level=info msg="StartContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" returns successfully" May 15 23:46:06.556155 containerd[1485]: time="2025-05-15T23:46:06.556105203Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:46:06.576416 containerd[1485]: time="2025-05-15T23:46:06.576221593Z" level=info msg="CreateContainer within sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\"" May 15 23:46:06.579714 containerd[1485]: time="2025-05-15T23:46:06.578614326Z" level=info msg="StartContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\"" May 15 23:46:06.628451 systemd[1]: Started cri-containerd-dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137.scope - libcontainer container dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137. 
May 15 23:46:06.644590 kubelet[2719]: I0515 23:46:06.644515 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vv29b" podStartSLOduration=1.891502287 podStartE2EDuration="10.644475527s" podCreationTimestamp="2025-05-15 23:45:56 +0000 UTC" firstStartedPulling="2025-05-15 23:45:57.046385803 +0000 UTC m=+7.712517293" lastFinishedPulling="2025-05-15 23:46:05.799359043 +0000 UTC m=+16.465490533" observedRunningTime="2025-05-15 23:46:06.636629764 +0000 UTC m=+17.302761254" watchObservedRunningTime="2025-05-15 23:46:06.644475527 +0000 UTC m=+17.310607057" May 15 23:46:06.738706 containerd[1485]: time="2025-05-15T23:46:06.738623642Z" level=info msg="StartContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" returns successfully" May 15 23:46:06.923645 kubelet[2719]: I0515 23:46:06.922658 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 15 23:46:06.982395 systemd[1]: Created slice kubepods-burstable-pod543dd7bd_b7b8_4ab1_930f_11461b6ddfe2.slice - libcontainer container kubepods-burstable-pod543dd7bd_b7b8_4ab1_930f_11461b6ddfe2.slice. May 15 23:46:06.992664 systemd[1]: Created slice kubepods-burstable-podf0378dee_52e9_4e7c_922d_cd68bb5af9e0.slice - libcontainer container kubepods-burstable-podf0378dee_52e9_4e7c_922d_cd68bb5af9e0.slice. 
May 15 23:46:07.080723 kubelet[2719]: I0515 23:46:07.080667 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4j2\" (UniqueName: \"kubernetes.io/projected/f0378dee-52e9-4e7c-922d-cd68bb5af9e0-kube-api-access-mb4j2\") pod \"coredns-674b8bbfcf-hzf8s\" (UID: \"f0378dee-52e9-4e7c-922d-cd68bb5af9e0\") " pod="kube-system/coredns-674b8bbfcf-hzf8s" May 15 23:46:07.080723 kubelet[2719]: I0515 23:46:07.080718 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0378dee-52e9-4e7c-922d-cd68bb5af9e0-config-volume\") pod \"coredns-674b8bbfcf-hzf8s\" (UID: \"f0378dee-52e9-4e7c-922d-cd68bb5af9e0\") " pod="kube-system/coredns-674b8bbfcf-hzf8s" May 15 23:46:07.081009 kubelet[2719]: I0515 23:46:07.080755 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543dd7bd-b7b8-4ab1-930f-11461b6ddfe2-config-volume\") pod \"coredns-674b8bbfcf-2cxhq\" (UID: \"543dd7bd-b7b8-4ab1-930f-11461b6ddfe2\") " pod="kube-system/coredns-674b8bbfcf-2cxhq" May 15 23:46:07.081009 kubelet[2719]: I0515 23:46:07.080779 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cm2q\" (UniqueName: \"kubernetes.io/projected/543dd7bd-b7b8-4ab1-930f-11461b6ddfe2-kube-api-access-5cm2q\") pod \"coredns-674b8bbfcf-2cxhq\" (UID: \"543dd7bd-b7b8-4ab1-930f-11461b6ddfe2\") " pod="kube-system/coredns-674b8bbfcf-2cxhq" May 15 23:46:07.288969 containerd[1485]: time="2025-05-15T23:46:07.288852201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2cxhq,Uid:543dd7bd-b7b8-4ab1-930f-11461b6ddfe2,Namespace:kube-system,Attempt:0,}" May 15 23:46:07.299366 containerd[1485]: time="2025-05-15T23:46:07.299321275Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-hzf8s,Uid:f0378dee-52e9-4e7c-922d-cd68bb5af9e0,Namespace:kube-system,Attempt:0,}" May 15 23:46:09.833242 systemd-networkd[1382]: cilium_host: Link UP May 15 23:46:09.836028 systemd-networkd[1382]: cilium_net: Link UP May 15 23:46:09.836775 systemd-networkd[1382]: cilium_net: Gained carrier May 15 23:46:09.836957 systemd-networkd[1382]: cilium_host: Gained carrier May 15 23:46:09.878399 systemd-networkd[1382]: cilium_net: Gained IPv6LL May 15 23:46:09.967166 systemd-networkd[1382]: cilium_vxlan: Link UP May 15 23:46:09.967176 systemd-networkd[1382]: cilium_vxlan: Gained carrier May 15 23:46:10.140485 systemd-networkd[1382]: cilium_host: Gained IPv6LL May 15 23:46:10.269322 kernel: NET: Registered PF_ALG protocol family May 15 23:46:10.981853 systemd-networkd[1382]: lxc_health: Link UP May 15 23:46:10.995733 systemd-networkd[1382]: lxc_health: Gained carrier May 15 23:46:11.349757 systemd-networkd[1382]: lxcb2f31a9e7b2c: Link UP May 15 23:46:11.355315 kernel: eth0: renamed from tmp2d12b May 15 23:46:11.360245 systemd-networkd[1382]: lxcb2f31a9e7b2c: Gained carrier May 15 23:46:11.380327 systemd-networkd[1382]: lxcac6cc79d2214: Link UP May 15 23:46:11.387051 kernel: eth0: renamed from tmp85877 May 15 23:46:11.389507 systemd-networkd[1382]: lxcac6cc79d2214: Gained carrier May 15 23:46:11.621352 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL May 15 23:46:12.388884 systemd-networkd[1382]: lxc_health: Gained IPv6LL May 15 23:46:12.516366 systemd-networkd[1382]: lxcb2f31a9e7b2c: Gained IPv6LL May 15 23:46:12.518919 systemd-networkd[1382]: lxcac6cc79d2214: Gained IPv6LL May 15 23:46:12.767160 kubelet[2719]: I0515 23:46:12.767014 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-thnx7" podStartSLOduration=10.937111424 podStartE2EDuration="16.766997548s" podCreationTimestamp="2025-05-15 23:45:56 +0000 UTC" firstStartedPulling="2025-05-15 23:45:56.849170431 +0000 UTC m=+7.515301921" 
lastFinishedPulling="2025-05-15 23:46:02.679056555 +0000 UTC m=+13.345188045" observedRunningTime="2025-05-15 23:46:07.578578698 +0000 UTC m=+18.244710228" watchObservedRunningTime="2025-05-15 23:46:12.766997548 +0000 UTC m=+23.433128998" May 15 23:46:15.460150 containerd[1485]: time="2025-05-15T23:46:15.458932325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:46:15.460150 containerd[1485]: time="2025-05-15T23:46:15.459113489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:46:15.460150 containerd[1485]: time="2025-05-15T23:46:15.459133009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:46:15.462039 containerd[1485]: time="2025-05-15T23:46:15.461347850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:46:15.476262 containerd[1485]: time="2025-05-15T23:46:15.466507545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:46:15.476262 containerd[1485]: time="2025-05-15T23:46:15.466571146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:46:15.476262 containerd[1485]: time="2025-05-15T23:46:15.466587027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:46:15.476262 containerd[1485]: time="2025-05-15T23:46:15.466669108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:46:15.507569 systemd[1]: Started cri-containerd-2d12b0daa3792abab66ecdc9e3d585ce7d2ee23a441984253661c2ff75aa6769.scope - libcontainer container 2d12b0daa3792abab66ecdc9e3d585ce7d2ee23a441984253661c2ff75aa6769. May 15 23:46:15.516547 systemd[1]: Started cri-containerd-8587739709a726e0ecfd4cb375beb5a1fd6e5677b57495fac77abcc18e513f45.scope - libcontainer container 8587739709a726e0ecfd4cb375beb5a1fd6e5677b57495fac77abcc18e513f45. May 15 23:46:15.576693 containerd[1485]: time="2025-05-15T23:46:15.576631736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2cxhq,Uid:543dd7bd-b7b8-4ab1-930f-11461b6ddfe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d12b0daa3792abab66ecdc9e3d585ce7d2ee23a441984253661c2ff75aa6769\"" May 15 23:46:15.590728 containerd[1485]: time="2025-05-15T23:46:15.590672435Z" level=info msg="CreateContainer within sandbox \"2d12b0daa3792abab66ecdc9e3d585ce7d2ee23a441984253661c2ff75aa6769\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:46:15.593166 containerd[1485]: time="2025-05-15T23:46:15.593116960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hzf8s,Uid:f0378dee-52e9-4e7c-922d-cd68bb5af9e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8587739709a726e0ecfd4cb375beb5a1fd6e5677b57495fac77abcc18e513f45\"" May 15 23:46:15.602653 containerd[1485]: time="2025-05-15T23:46:15.602591135Z" level=info msg="CreateContainer within sandbox \"8587739709a726e0ecfd4cb375beb5a1fd6e5677b57495fac77abcc18e513f45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:46:15.626403 containerd[1485]: time="2025-05-15T23:46:15.625932486Z" level=info msg="CreateContainer within sandbox \"2d12b0daa3792abab66ecdc9e3d585ce7d2ee23a441984253661c2ff75aa6769\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d18e9b8a70a72280b2839d9441126db0f6ce5f87256ca7050d0af06ca30d5d\"" May 
15 23:46:15.628012 containerd[1485]: time="2025-05-15T23:46:15.627496714Z" level=info msg="CreateContainer within sandbox \"8587739709a726e0ecfd4cb375beb5a1fd6e5677b57495fac77abcc18e513f45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7be3be443b670c784f404c8d0a478a91b64891297d2a291260813d225311e0b8\"" May 15 23:46:15.628012 containerd[1485]: time="2025-05-15T23:46:15.627730319Z" level=info msg="StartContainer for \"70d18e9b8a70a72280b2839d9441126db0f6ce5f87256ca7050d0af06ca30d5d\"" May 15 23:46:15.631948 containerd[1485]: time="2025-05-15T23:46:15.629917239Z" level=info msg="StartContainer for \"7be3be443b670c784f404c8d0a478a91b64891297d2a291260813d225311e0b8\"" May 15 23:46:15.673635 systemd[1]: Started cri-containerd-7be3be443b670c784f404c8d0a478a91b64891297d2a291260813d225311e0b8.scope - libcontainer container 7be3be443b670c784f404c8d0a478a91b64891297d2a291260813d225311e0b8. May 15 23:46:15.689462 systemd[1]: Started cri-containerd-70d18e9b8a70a72280b2839d9441126db0f6ce5f87256ca7050d0af06ca30d5d.scope - libcontainer container 70d18e9b8a70a72280b2839d9441126db0f6ce5f87256ca7050d0af06ca30d5d. May 15 23:46:15.719982 containerd[1485]: time="2025-05-15T23:46:15.719124045Z" level=info msg="StartContainer for \"7be3be443b670c784f404c8d0a478a91b64891297d2a291260813d225311e0b8\" returns successfully" May 15 23:46:15.737677 containerd[1485]: time="2025-05-15T23:46:15.737611306Z" level=info msg="StartContainer for \"70d18e9b8a70a72280b2839d9441126db0f6ce5f87256ca7050d0af06ca30d5d\" returns successfully" May 15 23:46:16.469357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758877280.mount: Deactivated successfully. 
May 15 23:46:16.600505 kubelet[2719]: I0515 23:46:16.599403 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hzf8s" podStartSLOduration=20.599378808 podStartE2EDuration="20.599378808s" podCreationTimestamp="2025-05-15 23:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:46:16.599026281 +0000 UTC m=+27.265157771" watchObservedRunningTime="2025-05-15 23:46:16.599378808 +0000 UTC m=+27.265510378" May 15 23:46:16.617659 kubelet[2719]: I0515 23:46:16.617560 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2cxhq" podStartSLOduration=20.617531445 podStartE2EDuration="20.617531445s" podCreationTimestamp="2025-05-15 23:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:46:16.616898713 +0000 UTC m=+27.283030163" watchObservedRunningTime="2025-05-15 23:46:16.617531445 +0000 UTC m=+27.283663015" May 15 23:50:30.196769 systemd[1]: Started sshd@8-168.119.108.125:22-139.178.68.195:38332.service - OpenSSH per-connection server daemon (139.178.68.195:38332). May 15 23:50:31.192159 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 38332 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:31.194133 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:31.200696 systemd-logind[1460]: New session 8 of user core. May 15 23:50:31.203645 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 23:50:31.977114 sshd[4156]: Connection closed by 139.178.68.195 port 38332 May 15 23:50:31.977592 sshd-session[4154]: pam_unix(sshd:session): session closed for user core May 15 23:50:31.982623 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. 
May 15 23:50:31.982752 systemd[1]: sshd@8-168.119.108.125:22-139.178.68.195:38332.service: Deactivated successfully. May 15 23:50:31.986331 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:50:31.990974 systemd-logind[1460]: Removed session 8. May 15 23:50:37.158679 systemd[1]: Started sshd@9-168.119.108.125:22-139.178.68.195:38834.service - OpenSSH per-connection server daemon (139.178.68.195:38834). May 15 23:50:38.170327 sshd[4167]: Accepted publickey for core from 139.178.68.195 port 38834 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:38.173431 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:38.181069 systemd-logind[1460]: New session 9 of user core. May 15 23:50:38.189616 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 23:50:38.940473 sshd[4169]: Connection closed by 139.178.68.195 port 38834 May 15 23:50:38.941428 sshd-session[4167]: pam_unix(sshd:session): session closed for user core May 15 23:50:38.946751 systemd[1]: sshd@9-168.119.108.125:22-139.178.68.195:38834.service: Deactivated successfully. May 15 23:50:38.950160 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:50:38.952065 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. May 15 23:50:38.954243 systemd-logind[1460]: Removed session 9. May 15 23:50:44.123717 systemd[1]: Started sshd@10-168.119.108.125:22-139.178.68.195:58936.service - OpenSSH per-connection server daemon (139.178.68.195:58936). May 15 23:50:45.120232 sshd[4181]: Accepted publickey for core from 139.178.68.195 port 58936 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:45.122171 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:45.128178 systemd-logind[1460]: New session 10 of user core. May 15 23:50:45.135462 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 15 23:50:45.887101 sshd[4183]: Connection closed by 139.178.68.195 port 58936 May 15 23:50:45.888048 sshd-session[4181]: pam_unix(sshd:session): session closed for user core May 15 23:50:45.894378 systemd[1]: sshd@10-168.119.108.125:22-139.178.68.195:58936.service: Deactivated successfully. May 15 23:50:45.896830 systemd[1]: session-10.scope: Deactivated successfully. May 15 23:50:45.898797 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. May 15 23:50:45.899939 systemd-logind[1460]: Removed session 10. May 15 23:50:46.068671 systemd[1]: Started sshd@11-168.119.108.125:22-139.178.68.195:58950.service - OpenSSH per-connection server daemon (139.178.68.195:58950). May 15 23:50:47.080433 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 58950 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:47.082459 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:47.087667 systemd-logind[1460]: New session 11 of user core. May 15 23:50:47.093567 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 23:50:47.902860 sshd[4197]: Connection closed by 139.178.68.195 port 58950 May 15 23:50:47.903538 sshd-session[4195]: pam_unix(sshd:session): session closed for user core May 15 23:50:47.908714 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. May 15 23:50:47.909434 systemd[1]: sshd@11-168.119.108.125:22-139.178.68.195:58950.service: Deactivated successfully. May 15 23:50:47.911866 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:50:47.914568 systemd-logind[1460]: Removed session 11. May 15 23:50:48.081823 systemd[1]: Started sshd@12-168.119.108.125:22-139.178.68.195:58954.service - OpenSSH per-connection server daemon (139.178.68.195:58954). 
May 15 23:50:49.080130 sshd[4207]: Accepted publickey for core from 139.178.68.195 port 58954 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:49.082351 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:49.088739 systemd-logind[1460]: New session 12 of user core. May 15 23:50:49.095564 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:50:49.858537 sshd[4209]: Connection closed by 139.178.68.195 port 58954 May 15 23:50:49.859602 sshd-session[4207]: pam_unix(sshd:session): session closed for user core May 15 23:50:49.864541 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. May 15 23:50:49.865319 systemd[1]: sshd@12-168.119.108.125:22-139.178.68.195:58954.service: Deactivated successfully. May 15 23:50:49.867554 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:50:49.869032 systemd-logind[1460]: Removed session 12. May 15 23:50:52.582389 systemd[1]: Started sshd@13-168.119.108.125:22-103.232.80.5:53434.service - OpenSSH per-connection server daemon (103.232.80.5:53434). May 15 23:50:53.096057 sshd[4222]: Connection closed by 103.232.80.5 port 53434 [preauth] May 15 23:50:53.098500 systemd[1]: sshd@13-168.119.108.125:22-103.232.80.5:53434.service: Deactivated successfully. May 15 23:50:55.041600 systemd[1]: Started sshd@14-168.119.108.125:22-139.178.68.195:44250.service - OpenSSH per-connection server daemon (139.178.68.195:44250). May 15 23:50:56.049680 sshd[4227]: Accepted publickey for core from 139.178.68.195 port 44250 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:56.051946 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:56.056943 systemd-logind[1460]: New session 13 of user core. May 15 23:50:56.069608 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 15 23:50:56.834963 sshd[4229]: Connection closed by 139.178.68.195 port 44250 May 15 23:50:56.833657 sshd-session[4227]: pam_unix(sshd:session): session closed for user core May 15 23:50:56.840892 systemd[1]: sshd@14-168.119.108.125:22-139.178.68.195:44250.service: Deactivated successfully. May 15 23:50:56.843773 systemd[1]: session-13.scope: Deactivated successfully. May 15 23:50:56.845364 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. May 15 23:50:56.846738 systemd-logind[1460]: Removed session 13. May 15 23:50:57.012663 systemd[1]: Started sshd@15-168.119.108.125:22-139.178.68.195:44260.service - OpenSSH per-connection server daemon (139.178.68.195:44260). May 15 23:50:58.009027 sshd[4240]: Accepted publickey for core from 139.178.68.195 port 44260 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:50:58.011357 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:58.017845 systemd-logind[1460]: New session 14 of user core. May 15 23:50:58.029581 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 23:50:58.813215 sshd[4244]: Connection closed by 139.178.68.195 port 44260 May 15 23:50:58.814457 sshd-session[4240]: pam_unix(sshd:session): session closed for user core May 15 23:50:58.819559 systemd[1]: sshd@15-168.119.108.125:22-139.178.68.195:44260.service: Deactivated successfully. May 15 23:50:58.821510 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:50:58.822485 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. May 15 23:50:58.824903 systemd-logind[1460]: Removed session 14. May 15 23:50:58.996658 systemd[1]: Started sshd@16-168.119.108.125:22-139.178.68.195:44264.service - OpenSSH per-connection server daemon (139.178.68.195:44264). 
May 15 23:50:59.871280 update_engine[1463]: I20250515 23:50:59.870615 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 23:50:59.871280 update_engine[1463]: I20250515 23:50:59.870675 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 23:50:59.871280 update_engine[1463]: I20250515 23:50:59.870942 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871472 1463 omaha_request_params.cc:62] Current group set to stable May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871608 1463 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871624 1463 update_attempter.cc:643] Scheduling an action processor start. May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871644 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871680 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871747 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871757 1463 omaha_request_action.cc:272] Request: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: May 15 23:50:59.871993 update_engine[1463]: I20250515 23:50:59.871766 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 23:50:59.873619 locksmithd[1500]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 23:50:59.874168 update_engine[1463]: I20250515 23:50:59.874102 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 23:50:59.874682 update_engine[1463]: I20250515 23:50:59.874567 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 23:50:59.875013 update_engine[1463]: E20250515 23:50:59.874966 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 23:50:59.875079 update_engine[1463]: I20250515 23:50:59.875036 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 23:51:00.006227 sshd[4252]: Accepted publickey for core from 139.178.68.195 port 44264 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:00.008501 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:00.014416 systemd-logind[1460]: New session 15 of user core. May 15 23:51:00.018442 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 23:51:01.727871 sshd[4254]: Connection closed by 139.178.68.195 port 44264 May 15 23:51:01.728478 sshd-session[4252]: pam_unix(sshd:session): session closed for user core May 15 23:51:01.733950 systemd[1]: sshd@16-168.119.108.125:22-139.178.68.195:44264.service: Deactivated successfully. May 15 23:51:01.736954 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:51:01.739891 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. May 15 23:51:01.741176 systemd-logind[1460]: Removed session 15. May 15 23:51:01.908737 systemd[1]: Started sshd@17-168.119.108.125:22-139.178.68.195:44276.service - OpenSSH per-connection server daemon (139.178.68.195:44276). 
May 15 23:51:02.906334 sshd[4270]: Accepted publickey for core from 139.178.68.195 port 44276 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:02.908331 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:02.916401 systemd-logind[1460]: New session 16 of user core. May 15 23:51:02.924608 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 23:51:03.802668 sshd[4272]: Connection closed by 139.178.68.195 port 44276 May 15 23:51:03.802147 sshd-session[4270]: pam_unix(sshd:session): session closed for user core May 15 23:51:03.808139 systemd[1]: sshd@17-168.119.108.125:22-139.178.68.195:44276.service: Deactivated successfully. May 15 23:51:03.811167 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:51:03.812030 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. May 15 23:51:03.814394 systemd-logind[1460]: Removed session 16. May 15 23:51:03.978647 systemd[1]: Started sshd@18-168.119.108.125:22-139.178.68.195:56812.service - OpenSSH per-connection server daemon (139.178.68.195:56812). May 15 23:51:04.984739 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 56812 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:04.986368 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:04.993056 systemd-logind[1460]: New session 17 of user core. May 15 23:51:04.997377 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 23:51:05.755142 sshd[4283]: Connection closed by 139.178.68.195 port 56812 May 15 23:51:05.756539 sshd-session[4281]: pam_unix(sshd:session): session closed for user core May 15 23:51:05.762486 systemd[1]: sshd@18-168.119.108.125:22-139.178.68.195:56812.service: Deactivated successfully. May 15 23:51:05.764591 systemd[1]: session-17.scope: Deactivated successfully. 
May 15 23:51:05.766092 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. May 15 23:51:05.768450 systemd-logind[1460]: Removed session 17. May 15 23:51:09.874281 update_engine[1463]: I20250515 23:51:09.873909 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 23:51:09.875016 update_engine[1463]: I20250515 23:51:09.874344 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 23:51:09.875016 update_engine[1463]: I20250515 23:51:09.874682 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 23:51:09.875498 update_engine[1463]: E20250515 23:51:09.875349 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 23:51:09.875498 update_engine[1463]: I20250515 23:51:09.875453 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 23:51:10.935024 systemd[1]: Started sshd@19-168.119.108.125:22-139.178.68.195:56820.service - OpenSSH per-connection server daemon (139.178.68.195:56820). May 15 23:51:11.933941 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 56820 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:11.936289 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:11.942311 systemd-logind[1460]: New session 18 of user core. May 15 23:51:11.946393 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:51:12.699290 sshd[4298]: Connection closed by 139.178.68.195 port 56820 May 15 23:51:12.700723 sshd-session[4296]: pam_unix(sshd:session): session closed for user core May 15 23:51:12.707572 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. May 15 23:51:12.708505 systemd[1]: sshd@19-168.119.108.125:22-139.178.68.195:56820.service: Deactivated successfully. May 15 23:51:12.711095 systemd[1]: session-18.scope: Deactivated successfully. 
May 15 23:51:12.712698 systemd-logind[1460]: Removed session 18. May 15 23:51:12.885769 systemd[1]: Started sshd@20-168.119.108.125:22-139.178.68.195:56830.service - OpenSSH per-connection server daemon (139.178.68.195:56830). May 15 23:51:13.884331 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 56830 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:13.886498 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:13.895412 systemd-logind[1460]: New session 19 of user core. May 15 23:51:13.908474 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:51:16.249829 containerd[1485]: time="2025-05-15T23:51:16.249718465Z" level=info msg="StopContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" with timeout 30 (s)" May 15 23:51:16.250833 containerd[1485]: time="2025-05-15T23:51:16.250422337Z" level=info msg="Stop container \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" with signal terminated" May 15 23:51:16.266981 containerd[1485]: time="2025-05-15T23:51:16.266856920Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:51:16.271383 systemd[1]: cri-containerd-7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8.scope: Deactivated successfully. 
May 15 23:51:16.282436 containerd[1485]: time="2025-05-15T23:51:16.282394223Z" level=info msg="StopContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" with timeout 2 (s)" May 15 23:51:16.283566 containerd[1485]: time="2025-05-15T23:51:16.283438470Z" level=info msg="Stop container \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" with signal terminated" May 15 23:51:16.293396 systemd-networkd[1382]: lxc_health: Link DOWN May 15 23:51:16.293430 systemd-networkd[1382]: lxc_health: Lost carrier May 15 23:51:16.310972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8-rootfs.mount: Deactivated successfully. May 15 23:51:16.314331 systemd[1]: cri-containerd-dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137.scope: Deactivated successfully. May 15 23:51:16.314680 systemd[1]: cri-containerd-dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137.scope: Consumed 7.960s CPU time. May 15 23:51:16.330778 containerd[1485]: time="2025-05-15T23:51:16.330680487Z" level=info msg="shim disconnected" id=7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8 namespace=k8s.io May 15 23:51:16.330778 containerd[1485]: time="2025-05-15T23:51:16.330735849Z" level=warning msg="cleaning up after shim disconnected" id=7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8 namespace=k8s.io May 15 23:51:16.330778 containerd[1485]: time="2025-05-15T23:51:16.330748490Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:16.335687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137-rootfs.mount: Deactivated successfully. 
May 15 23:51:16.341592 containerd[1485]: time="2025-05-15T23:51:16.341452134Z" level=info msg="shim disconnected" id=dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137 namespace=k8s.io May 15 23:51:16.341592 containerd[1485]: time="2025-05-15T23:51:16.341531697Z" level=warning msg="cleaning up after shim disconnected" id=dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137 namespace=k8s.io May 15 23:51:16.341592 containerd[1485]: time="2025-05-15T23:51:16.341542658Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:16.363630 containerd[1485]: time="2025-05-15T23:51:16.363332564Z" level=info msg="StopContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" returns successfully" May 15 23:51:16.366211 containerd[1485]: time="2025-05-15T23:51:16.364701505Z" level=info msg="StopPodSandbox for \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\"" May 15 23:51:16.366211 containerd[1485]: time="2025-05-15T23:51:16.364746547Z" level=info msg="Container to stop \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.367333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462-shm.mount: Deactivated successfully. May 15 23:51:16.374640 containerd[1485]: time="2025-05-15T23:51:16.374300100Z" level=info msg="StopContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" returns successfully" May 15 23:51:16.374794 systemd[1]: cri-containerd-be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462.scope: Deactivated successfully. 
May 15 23:51:16.377206 containerd[1485]: time="2025-05-15T23:51:16.377082425Z" level=info msg="StopPodSandbox for \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\"" May 15 23:51:16.377668 containerd[1485]: time="2025-05-15T23:51:16.377125867Z" level=info msg="Container to stop \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.377668 containerd[1485]: time="2025-05-15T23:51:16.377595849Z" level=info msg="Container to stop \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.377668 containerd[1485]: time="2025-05-15T23:51:16.377614609Z" level=info msg="Container to stop \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.377668 containerd[1485]: time="2025-05-15T23:51:16.377623690Z" level=info msg="Container to stop \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.377668 containerd[1485]: time="2025-05-15T23:51:16.377631690Z" level=info msg="Container to stop \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:51:16.379901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f-shm.mount: Deactivated successfully. May 15 23:51:16.387598 systemd[1]: cri-containerd-5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f.scope: Deactivated successfully. 
May 15 23:51:16.420761 containerd[1485]: time="2025-05-15T23:51:16.420674557Z" level=info msg="shim disconnected" id=be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462 namespace=k8s.io May 15 23:51:16.420761 containerd[1485]: time="2025-05-15T23:51:16.420763001Z" level=warning msg="cleaning up after shim disconnected" id=be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462 namespace=k8s.io May 15 23:51:16.421248 containerd[1485]: time="2025-05-15T23:51:16.420779522Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:16.421248 containerd[1485]: time="2025-05-15T23:51:16.420688518Z" level=info msg="shim disconnected" id=5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f namespace=k8s.io May 15 23:51:16.421248 containerd[1485]: time="2025-05-15T23:51:16.420847365Z" level=warning msg="cleaning up after shim disconnected" id=5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f namespace=k8s.io May 15 23:51:16.421248 containerd[1485]: time="2025-05-15T23:51:16.420862206Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:16.444657 containerd[1485]: time="2025-05-15T23:51:16.444417671Z" level=info msg="TearDown network for sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" successfully" May 15 23:51:16.444657 containerd[1485]: time="2025-05-15T23:51:16.444455393Z" level=info msg="StopPodSandbox for \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" returns successfully" May 15 23:51:16.447010 containerd[1485]: time="2025-05-15T23:51:16.446822340Z" level=info msg="TearDown network for sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" successfully" May 15 23:51:16.447010 containerd[1485]: time="2025-05-15T23:51:16.446869302Z" level=info msg="StopPodSandbox for \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" returns successfully" May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597388 2719 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-hubble-tls\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597436 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-kernel\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597467 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz978\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-kube-api-access-zz978\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597490 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-cilium-config-path\") pod \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\" (UID: \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\") " May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597513 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-config-path\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598231 kubelet[2719]: I0515 23:51:16.597530 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-net\") pod 
\"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597552 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a33323a-c06b-4d74-96b7-b79400cedddf-clustermesh-secrets\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597573 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-cgroup\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597589 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-hostproc\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597606 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cni-path\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597623 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-bpf-maps\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.598890 kubelet[2719]: I0515 23:51:16.597645 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbx6m\" (UniqueName: 
\"kubernetes.io/projected/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-kube-api-access-cbx6m\") pod \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\" (UID: \"a1e3c368-2bbd-49ab-89af-a59c5c0cff19\") " May 15 23:51:16.599060 kubelet[2719]: I0515 23:51:16.597669 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-run\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.599060 kubelet[2719]: I0515 23:51:16.597690 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-lib-modules\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.599060 kubelet[2719]: I0515 23:51:16.597712 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-etc-cni-netd\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.599060 kubelet[2719]: I0515 23:51:16.597728 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-xtables-lock\") pod \"6a33323a-c06b-4d74-96b7-b79400cedddf\" (UID: \"6a33323a-c06b-4d74-96b7-b79400cedddf\") " May 15 23:51:16.599060 kubelet[2719]: I0515 23:51:16.597837 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.599360 kubelet[2719]: I0515 23:51:16.599320 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.599822 kubelet[2719]: I0515 23:51:16.599472 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601422 kubelet[2719]: I0515 23:51:16.601376 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:51:16.601519 kubelet[2719]: I0515 23:51:16.601437 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601519 kubelet[2719]: I0515 23:51:16.601459 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601519 kubelet[2719]: I0515 23:51:16.601477 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601638 kubelet[2719]: I0515 23:51:16.601617 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601678 kubelet[2719]: I0515 23:51:16.601642 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601678 kubelet[2719]: I0515 23:51:16.601661 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.601741 kubelet[2719]: I0515 23:51:16.601680 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:51:16.606266 kubelet[2719]: I0515 23:51:16.605512 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 23:51:16.607035 kubelet[2719]: I0515 23:51:16.607002 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1e3c368-2bbd-49ab-89af-a59c5c0cff19" (UID: "a1e3c368-2bbd-49ab-89af-a59c5c0cff19"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 23:51:16.607299 kubelet[2719]: I0515 23:51:16.607273 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-kube-api-access-zz978" (OuterVolumeSpecName: "kube-api-access-zz978") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "kube-api-access-zz978". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:51:16.607857 kubelet[2719]: I0515 23:51:16.607836 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a33323a-c06b-4d74-96b7-b79400cedddf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a33323a-c06b-4d74-96b7-b79400cedddf" (UID: "6a33323a-c06b-4d74-96b7-b79400cedddf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 23:51:16.608543 kubelet[2719]: I0515 23:51:16.608488 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-kube-api-access-cbx6m" (OuterVolumeSpecName: "kube-api-access-cbx6m") pod "a1e3c368-2bbd-49ab-89af-a59c5c0cff19" (UID: "a1e3c368-2bbd-49ab-89af-a59c5c0cff19"). InnerVolumeSpecName "kube-api-access-cbx6m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698649 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-config-path\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698695 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-net\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698706 2719 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a33323a-c06b-4d74-96b7-b79400cedddf-clustermesh-secrets\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698717 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-cgroup\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698728 2719 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-hostproc\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698736 2719 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cni-path\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.698722 kubelet[2719]: I0515 23:51:16.698744 2719 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-bpf-maps\") on node 
\"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698753 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cbx6m\" (UniqueName: \"kubernetes.io/projected/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-kube-api-access-cbx6m\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698764 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-cilium-run\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698772 2719 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-lib-modules\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698780 2719 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-etc-cni-netd\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698788 2719 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-xtables-lock\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698797 2719 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-hubble-tls\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698805 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a33323a-c06b-4d74-96b7-b79400cedddf-host-proc-sys-kernel\") on node 
\"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699060 kubelet[2719]: I0515 23:51:16.698813 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zz978\" (UniqueName: \"kubernetes.io/projected/6a33323a-c06b-4d74-96b7-b79400cedddf-kube-api-access-zz978\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:16.699446 kubelet[2719]: I0515 23:51:16.698822 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1e3c368-2bbd-49ab-89af-a59c5c0cff19-cilium-config-path\") on node \"ci-4152-2-3-n-32b6392e63\" DevicePath \"\"" May 15 23:51:17.247957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462-rootfs.mount: Deactivated successfully. May 15 23:51:17.248159 systemd[1]: var-lib-kubelet-pods-a1e3c368\x2d2bbd\x2d49ab\x2d89af\x2da59c5c0cff19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcbx6m.mount: Deactivated successfully. May 15 23:51:17.248358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f-rootfs.mount: Deactivated successfully. May 15 23:51:17.248621 systemd[1]: var-lib-kubelet-pods-6a33323a\x2dc06b\x2d4d74\x2d96b7\x2db79400cedddf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzz978.mount: Deactivated successfully. May 15 23:51:17.249064 systemd[1]: var-lib-kubelet-pods-6a33323a\x2dc06b\x2d4d74\x2d96b7\x2db79400cedddf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:51:17.249263 systemd[1]: var-lib-kubelet-pods-6a33323a\x2dc06b\x2d4d74\x2d96b7\x2db79400cedddf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 23:51:17.360777 kubelet[2719]: I0515 23:51:17.360173 2719 scope.go:117] "RemoveContainer" containerID="7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8" May 15 23:51:17.365710 containerd[1485]: time="2025-05-15T23:51:17.365658514Z" level=info msg="RemoveContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\"" May 15 23:51:17.371113 systemd[1]: Removed slice kubepods-besteffort-poda1e3c368_2bbd_49ab_89af_a59c5c0cff19.slice - libcontainer container kubepods-besteffort-poda1e3c368_2bbd_49ab_89af_a59c5c0cff19.slice. May 15 23:51:17.373507 containerd[1485]: time="2025-05-15T23:51:17.371634504Z" level=info msg="RemoveContainer for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" returns successfully" May 15 23:51:17.376352 kubelet[2719]: I0515 23:51:17.375613 2719 scope.go:117] "RemoveContainer" containerID="7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8" May 15 23:51:17.376541 containerd[1485]: time="2025-05-15T23:51:17.376325796Z" level=error msg="ContainerStatus for \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\": not found" May 15 23:51:17.377589 kubelet[2719]: E0515 23:51:17.376863 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\": not found" containerID="7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8" May 15 23:51:17.377589 kubelet[2719]: I0515 23:51:17.376900 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8"} err="failed to get container status 
\"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b0f793c41acce3d6c3f2a9e6d2c70bd4bc3510edd80428fad6a7b58cd85eed8\": not found" May 15 23:51:17.377589 kubelet[2719]: I0515 23:51:17.376948 2719 scope.go:117] "RemoveContainer" containerID="dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137" May 15 23:51:17.379053 containerd[1485]: time="2025-05-15T23:51:17.378483174Z" level=info msg="RemoveContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\"" May 15 23:51:17.381144 systemd[1]: Removed slice kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice - libcontainer container kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice. May 15 23:51:17.381486 systemd[1]: kubepods-burstable-pod6a33323a_c06b_4d74_96b7_b79400cedddf.slice: Consumed 8.054s CPU time. May 15 23:51:17.386513 containerd[1485]: time="2025-05-15T23:51:17.386456015Z" level=info msg="RemoveContainer for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" returns successfully" May 15 23:51:17.386876 kubelet[2719]: I0515 23:51:17.386844 2719 scope.go:117] "RemoveContainer" containerID="b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490" May 15 23:51:17.388553 containerd[1485]: time="2025-05-15T23:51:17.388265017Z" level=info msg="RemoveContainer for \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\"" May 15 23:51:17.393156 containerd[1485]: time="2025-05-15T23:51:17.393103076Z" level=info msg="RemoveContainer for \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\" returns successfully" May 15 23:51:17.393752 kubelet[2719]: I0515 23:51:17.393598 2719 scope.go:117] "RemoveContainer" containerID="86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4" May 15 23:51:17.395002 containerd[1485]: time="2025-05-15T23:51:17.394944959Z" level=info msg="RemoveContainer for 
\"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\"" May 15 23:51:17.400058 containerd[1485]: time="2025-05-15T23:51:17.399717495Z" level=info msg="RemoveContainer for \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\" returns successfully" May 15 23:51:17.400175 kubelet[2719]: I0515 23:51:17.399949 2719 scope.go:117] "RemoveContainer" containerID="e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83" May 15 23:51:17.403292 containerd[1485]: time="2025-05-15T23:51:17.403161931Z" level=info msg="RemoveContainer for \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\"" May 15 23:51:17.407492 containerd[1485]: time="2025-05-15T23:51:17.407452646Z" level=info msg="RemoveContainer for \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\" returns successfully" May 15 23:51:17.408254 kubelet[2719]: I0515 23:51:17.407771 2719 scope.go:117] "RemoveContainer" containerID="04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73" May 15 23:51:17.409252 containerd[1485]: time="2025-05-15T23:51:17.408839108Z" level=info msg="RemoveContainer for \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\"" May 15 23:51:17.413511 containerd[1485]: time="2025-05-15T23:51:17.412635400Z" level=info msg="RemoveContainer for \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\" returns successfully" May 15 23:51:17.413875 kubelet[2719]: I0515 23:51:17.413853 2719 scope.go:117] "RemoveContainer" containerID="dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137" May 15 23:51:17.414409 containerd[1485]: time="2025-05-15T23:51:17.414354678Z" level=error msg="ContainerStatus for \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\": not found" May 15 23:51:17.414643 kubelet[2719]: E0515 
23:51:17.414623 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\": not found" containerID="dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137" May 15 23:51:17.414778 kubelet[2719]: I0515 23:51:17.414755 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137"} err="failed to get container status \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\": rpc error: code = NotFound desc = an error occurred when try to find container \"dca6e67ec3d4ed1eeafe9db162d22a07fb98aefc1f7d6498ff12465e6e7ae137\": not found" May 15 23:51:17.415679 kubelet[2719]: I0515 23:51:17.415652 2719 scope.go:117] "RemoveContainer" containerID="b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490" May 15 23:51:17.417381 containerd[1485]: time="2025-05-15T23:51:17.417337773Z" level=error msg="ContainerStatus for \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\": not found" May 15 23:51:17.417596 kubelet[2719]: E0515 23:51:17.417573 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\": not found" containerID="b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490" May 15 23:51:17.417630 kubelet[2719]: I0515 23:51:17.417604 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490"} err="failed to get container status 
\"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\": rpc error: code = NotFound desc = an error occurred when try to find container \"b20ea38e0b0b7b17bd96832cc3500e4a335402c38956b1b5f69837c7ffec8490\": not found" May 15 23:51:17.417630 kubelet[2719]: I0515 23:51:17.417626 2719 scope.go:117] "RemoveContainer" containerID="86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4" May 15 23:51:17.417835 containerd[1485]: time="2025-05-15T23:51:17.417804514Z" level=error msg="ContainerStatus for \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\": not found" May 15 23:51:17.418847 kubelet[2719]: E0515 23:51:17.418821 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\": not found" containerID="86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4" May 15 23:51:17.418892 kubelet[2719]: I0515 23:51:17.418855 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4"} err="failed to get container status \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"86dcd8071410c40bbaf64d7cb31b848862b394ea92c0c4fd726b86a51d729fb4\": not found" May 15 23:51:17.418892 kubelet[2719]: I0515 23:51:17.418873 2719 scope.go:117] "RemoveContainer" containerID="e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83" May 15 23:51:17.419089 containerd[1485]: time="2025-05-15T23:51:17.419054531Z" level=error msg="ContainerStatus for \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\": not found" May 15 23:51:17.419355 kubelet[2719]: E0515 23:51:17.419204 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\": not found" containerID="e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83" May 15 23:51:17.419355 kubelet[2719]: I0515 23:51:17.419269 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83"} err="failed to get container status \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2d59327419665f8cbed534049d8fecb1c8846a72b5ac062290d4b927c4c7a83\": not found" May 15 23:51:17.419355 kubelet[2719]: I0515 23:51:17.419285 2719 scope.go:117] "RemoveContainer" containerID="04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73" May 15 23:51:17.419645 containerd[1485]: time="2025-05-15T23:51:17.419557674Z" level=error msg="ContainerStatus for \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\": not found" May 15 23:51:17.419756 kubelet[2719]: E0515 23:51:17.419696 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\": not found" containerID="04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73" May 15 23:51:17.419756 kubelet[2719]: I0515 23:51:17.419724 2719 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73"} err="failed to get container status \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\": rpc error: code = NotFound desc = an error occurred when try to find container \"04f31ae4f0dddb29b51b4d4f219b0fe09618448a421256ec27fec2ca810d0e73\": not found" May 15 23:51:17.457130 kubelet[2719]: I0515 23:51:17.457084 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a33323a-c06b-4d74-96b7-b79400cedddf" path="/var/lib/kubelet/pods/6a33323a-c06b-4d74-96b7-b79400cedddf/volumes" May 15 23:51:17.458029 kubelet[2719]: I0515 23:51:17.457992 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e3c368-2bbd-49ab-89af-a59c5c0cff19" path="/var/lib/kubelet/pods/a1e3c368-2bbd-49ab-89af-a59c5c0cff19/volumes" May 15 23:51:18.313646 sshd[4311]: Connection closed by 139.178.68.195 port 56830 May 15 23:51:18.314090 sshd-session[4309]: pam_unix(sshd:session): session closed for user core May 15 23:51:18.318492 systemd[1]: sshd@20-168.119.108.125:22-139.178.68.195:56830.service: Deactivated successfully. May 15 23:51:18.320617 systemd[1]: session-19.scope: Deactivated successfully. May 15 23:51:18.320891 systemd[1]: session-19.scope: Consumed 1.163s CPU time. May 15 23:51:18.322231 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. May 15 23:51:18.323450 systemd-logind[1460]: Removed session 19. May 15 23:51:18.494682 systemd[1]: Started sshd@21-168.119.108.125:22-139.178.68.195:55162.service - OpenSSH per-connection server daemon (139.178.68.195:55162). 
May 15 23:51:19.491795 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 55162 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:19.493845 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:19.500265 systemd-logind[1460]: New session 20 of user core. May 15 23:51:19.506451 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 23:51:19.639743 kubelet[2719]: E0515 23:51:19.639701 2719 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:51:19.870803 update_engine[1463]: I20250515 23:51:19.869784 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 23:51:19.870803 update_engine[1463]: I20250515 23:51:19.870252 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 23:51:19.870803 update_engine[1463]: I20250515 23:51:19.870714 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 23:51:19.871773 update_engine[1463]: E20250515 23:51:19.871724 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 23:51:19.871974 update_engine[1463]: I20250515 23:51:19.871937 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 23:51:21.166987 systemd[1]: Created slice kubepods-burstable-podb089417e_ac8d_4cf8_ad60_ffabcc4fb6a5.slice - libcontainer container kubepods-burstable-podb089417e_ac8d_4cf8_ad60_ffabcc4fb6a5.slice. 
May 15 23:51:21.231703 kubelet[2719]: I0515 23:51:21.231646 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-host-proc-sys-kernel\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232445 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-hostproc\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232514 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-cni-path\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232547 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-lib-modules\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232581 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-cilium-cgroup\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232612 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-hubble-tls\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233020 kubelet[2719]: I0515 23:51:21.232646 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-cilium-run\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232678 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-clustermesh-secrets\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232707 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-bpf-maps\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232735 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-etc-cni-netd\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232762 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-host-proc-sys-net\") pod 
\"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232795 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgh8j\" (UniqueName: \"kubernetes.io/projected/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-kube-api-access-jgh8j\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233447 kubelet[2719]: I0515 23:51:21.232849 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-xtables-lock\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233942 kubelet[2719]: I0515 23:51:21.232889 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-cilium-config-path\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.233942 kubelet[2719]: I0515 23:51:21.232935 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5-cilium-ipsec-secrets\") pod \"cilium-269vr\" (UID: \"b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5\") " pod="kube-system/cilium-269vr" May 15 23:51:21.352381 sshd[4476]: Connection closed by 139.178.68.195 port 55162 May 15 23:51:21.349737 sshd-session[4474]: pam_unix(sshd:session): session closed for user core May 15 23:51:21.369326 systemd[1]: sshd@21-168.119.108.125:22-139.178.68.195:55162.service: Deactivated successfully. May 15 23:51:21.374678 systemd[1]: session-20.scope: Deactivated successfully. 
May 15 23:51:21.377522 systemd[1]: session-20.scope: Consumed 1.033s CPU time. May 15 23:51:21.381711 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. May 15 23:51:21.383426 systemd-logind[1460]: Removed session 20. May 15 23:51:21.474640 containerd[1485]: time="2025-05-15T23:51:21.474267923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-269vr,Uid:b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5,Namespace:kube-system,Attempt:0,}" May 15 23:51:21.502566 containerd[1485]: time="2025-05-15T23:51:21.502233114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:21.503547 containerd[1485]: time="2025-05-15T23:51:21.503359965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:21.503694 containerd[1485]: time="2025-05-15T23:51:21.503488771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:21.504084 containerd[1485]: time="2025-05-15T23:51:21.503899349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:21.525804 systemd[1]: Started cri-containerd-38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7.scope - libcontainer container 38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7. May 15 23:51:21.528272 systemd[1]: Started sshd@22-168.119.108.125:22-139.178.68.195:55174.service - OpenSSH per-connection server daemon (139.178.68.195:55174). 
May 15 23:51:21.565910 containerd[1485]: time="2025-05-15T23:51:21.565870285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-269vr,Uid:b089417e-ac8d-4cf8-ad60-ffabcc4fb6a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\"" May 15 23:51:21.572698 containerd[1485]: time="2025-05-15T23:51:21.572159691Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:51:21.582498 containerd[1485]: time="2025-05-15T23:51:21.582434838Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d\"" May 15 23:51:21.583238 containerd[1485]: time="2025-05-15T23:51:21.582965622Z" level=info msg="StartContainer for \"6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d\"" May 15 23:51:21.611466 systemd[1]: Started cri-containerd-6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d.scope - libcontainer container 6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d. May 15 23:51:21.648442 containerd[1485]: time="2025-05-15T23:51:21.648385794Z" level=info msg="StartContainer for \"6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d\" returns successfully" May 15 23:51:21.656751 systemd[1]: cri-containerd-6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d.scope: Deactivated successfully. 
May 15 23:51:21.690146 containerd[1485]: time="2025-05-15T23:51:21.689957563Z" level=info msg="shim disconnected" id=6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d namespace=k8s.io May 15 23:51:21.690354 containerd[1485]: time="2025-05-15T23:51:21.690147851Z" level=warning msg="cleaning up after shim disconnected" id=6e09c8e155720783759013ea66718866039006a5b06c42b42160a6e6f07f4f7d namespace=k8s.io May 15 23:51:21.690354 containerd[1485]: time="2025-05-15T23:51:21.690203574Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:22.395591 containerd[1485]: time="2025-05-15T23:51:22.395440072Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:51:22.410770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387612984.mount: Deactivated successfully. May 15 23:51:22.412264 containerd[1485]: time="2025-05-15T23:51:22.411806816Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba\"" May 15 23:51:22.417580 containerd[1485]: time="2025-05-15T23:51:22.415249013Z" level=info msg="StartContainer for \"435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba\"" May 15 23:51:22.450603 systemd[1]: Started cri-containerd-435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba.scope - libcontainer container 435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba. 
May 15 23:51:22.478350 containerd[1485]: time="2025-05-15T23:51:22.478290600Z" level=info msg="StartContainer for \"435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba\" returns successfully" May 15 23:51:22.484602 systemd[1]: cri-containerd-435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba.scope: Deactivated successfully. May 15 23:51:22.514217 containerd[1485]: time="2025-05-15T23:51:22.514124629Z" level=info msg="shim disconnected" id=435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba namespace=k8s.io May 15 23:51:22.514669 containerd[1485]: time="2025-05-15T23:51:22.514508567Z" level=warning msg="cleaning up after shim disconnected" id=435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba namespace=k8s.io May 15 23:51:22.514669 containerd[1485]: time="2025-05-15T23:51:22.514533168Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:22.545256 sshd[4517]: Accepted publickey for core from 139.178.68.195 port 55174 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:22.546554 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:22.553257 systemd-logind[1460]: New session 21 of user core. May 15 23:51:22.559796 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 23:51:23.233091 sshd[4659]: Connection closed by 139.178.68.195 port 55174 May 15 23:51:23.233706 sshd-session[4517]: pam_unix(sshd:session): session closed for user core May 15 23:51:23.239621 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. May 15 23:51:23.240391 systemd[1]: sshd@22-168.119.108.125:22-139.178.68.195:55174.service: Deactivated successfully. May 15 23:51:23.243809 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:51:23.247566 systemd-logind[1460]: Removed session 21. 
May 15 23:51:23.343572 systemd[1]: run-containerd-runc-k8s.io-435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba-runc.i8kE3l.mount: Deactivated successfully. May 15 23:51:23.343684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-435faeb585430b9cbc3b4e4ce407ad0ac5f0dace3177c34585eb52fd71446dba-rootfs.mount: Deactivated successfully. May 15 23:51:23.404102 containerd[1485]: time="2025-05-15T23:51:23.404015472Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:51:23.414241 systemd[1]: Started sshd@23-168.119.108.125:22-139.178.68.195:55188.service - OpenSSH per-connection server daemon (139.178.68.195:55188). May 15 23:51:23.431686 containerd[1485]: time="2025-05-15T23:51:23.431636969Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033\"" May 15 23:51:23.433534 containerd[1485]: time="2025-05-15T23:51:23.433048273Z" level=info msg="StartContainer for \"220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033\"" May 15 23:51:23.476588 systemd[1]: Started cri-containerd-220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033.scope - libcontainer container 220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033. May 15 23:51:23.512762 containerd[1485]: time="2025-05-15T23:51:23.512617055Z" level=info msg="StartContainer for \"220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033\" returns successfully" May 15 23:51:23.516831 systemd[1]: cri-containerd-220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033.scope: Deactivated successfully. 
May 15 23:51:23.544923 containerd[1485]: time="2025-05-15T23:51:23.544849482Z" level=info msg="shim disconnected" id=220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033 namespace=k8s.io May 15 23:51:23.545154 containerd[1485]: time="2025-05-15T23:51:23.545138975Z" level=warning msg="cleaning up after shim disconnected" id=220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033 namespace=k8s.io May 15 23:51:23.545228 containerd[1485]: time="2025-05-15T23:51:23.545214738Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:23.558660 containerd[1485]: time="2025-05-15T23:51:23.558368337Z" level=warning msg="cleanup warnings time=\"2025-05-15T23:51:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 23:51:24.343103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220306fde4430b2c53ca34f9be96d6d7421201c96fd41c32e151a1ce953a4033-rootfs.mount: Deactivated successfully. 
May 15 23:51:24.414235 containerd[1485]: time="2025-05-15T23:51:24.413387267Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:51:24.423651 sshd[4665]: Accepted publickey for core from 139.178.68.195 port 55188 ssh2: RSA SHA256:wC919pbD295BBEUlcnKsLe8ZohowqtK/lJm32ZhgKz0 May 15 23:51:24.428035 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:24.438561 containerd[1485]: time="2025-05-15T23:51:24.438133274Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4\"" May 15 23:51:24.439016 containerd[1485]: time="2025-05-15T23:51:24.438984673Z" level=info msg="StartContainer for \"d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4\"" May 15 23:51:24.442454 systemd-logind[1460]: New session 22 of user core. May 15 23:51:24.448467 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 23:51:24.491420 systemd[1]: Started cri-containerd-d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4.scope - libcontainer container d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4. May 15 23:51:24.522619 systemd[1]: cri-containerd-d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4.scope: Deactivated successfully. 
May 15 23:51:24.528222 containerd[1485]: time="2025-05-15T23:51:24.528135334Z" level=info msg="StartContainer for \"d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4\" returns successfully" May 15 23:51:24.557698 containerd[1485]: time="2025-05-15T23:51:24.557616837Z" level=info msg="shim disconnected" id=d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4 namespace=k8s.io May 15 23:51:24.557698 containerd[1485]: time="2025-05-15T23:51:24.557682680Z" level=warning msg="cleaning up after shim disconnected" id=d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4 namespace=k8s.io May 15 23:51:24.557698 containerd[1485]: time="2025-05-15T23:51:24.557693080Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:24.643384 kubelet[2719]: E0515 23:51:24.641765 2719 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:51:25.342536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d27f75893da232892e81d783b78d433cf06fac3516e5edf99c8c58a6295345a4-rootfs.mount: Deactivated successfully. 
May 15 23:51:25.421229 containerd[1485]: time="2025-05-15T23:51:25.420832933Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:51:25.447663 containerd[1485]: time="2025-05-15T23:51:25.447461787Z" level=info msg="CreateContainer within sandbox \"38883fb0ba96ed2678f1224a27f4084998f3ded818127fefa045161dadcf08d7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce\"" May 15 23:51:25.448615 containerd[1485]: time="2025-05-15T23:51:25.448166659Z" level=info msg="StartContainer for \"e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce\"" May 15 23:51:25.484446 systemd[1]: Started cri-containerd-e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce.scope - libcontainer container e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce. 
May 15 23:51:25.520283 containerd[1485]: time="2025-05-15T23:51:25.519947611Z" level=info msg="StartContainer for \"e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce\" returns successfully" May 15 23:51:25.843263 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 23:51:25.947487 kubelet[2719]: I0515 23:51:25.945648 2719 setters.go:618] "Node became not ready" node="ci-4152-2-3-n-32b6392e63" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:51:25Z","lastTransitionTime":"2025-05-15T23:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 23:51:26.447104 kubelet[2719]: I0515 23:51:26.446344 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-269vr" podStartSLOduration=5.446321059 podStartE2EDuration="5.446321059s" podCreationTimestamp="2025-05-15 23:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:26.443860827 +0000 UTC m=+337.109992317" watchObservedRunningTime="2025-05-15 23:51:26.446321059 +0000 UTC m=+337.112452549" May 15 23:51:28.788958 systemd-networkd[1382]: lxc_health: Link UP May 15 23:51:28.814270 systemd-networkd[1382]: lxc_health: Gained carrier May 15 23:51:29.311305 systemd[1]: run-containerd-runc-k8s.io-e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce-runc.RQQBcj.mount: Deactivated successfully. 
May 15 23:51:29.873314 update_engine[1463]: I20250515 23:51:29.873231 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 23:51:29.873667 update_engine[1463]: I20250515 23:51:29.873512 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 23:51:29.873796 update_engine[1463]: I20250515 23:51:29.873764 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 23:51:29.874246 update_engine[1463]: E20250515 23:51:29.874217 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 23:51:29.874310 update_engine[1463]: I20250515 23:51:29.874265 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 23:51:29.874310 update_engine[1463]: I20250515 23:51:29.874274 1463 omaha_request_action.cc:617] Omaha request response: May 15 23:51:29.874367 update_engine[1463]: E20250515 23:51:29.874347 1463 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 23:51:29.874395 update_engine[1463]: I20250515 23:51:29.874368 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 23:51:29.874395 update_engine[1463]: I20250515 23:51:29.874375 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 23:51:29.874395 update_engine[1463]: I20250515 23:51:29.874379 1463 update_attempter.cc:306] Processing Done. May 15 23:51:29.874395 update_engine[1463]: E20250515 23:51:29.874393 1463 update_attempter.cc:619] Update failed. 
May 15 23:51:29.874474 update_engine[1463]: I20250515 23:51:29.874398 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 23:51:29.874474 update_engine[1463]: I20250515 23:51:29.874403 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 23:51:29.874474 update_engine[1463]: I20250515 23:51:29.874408 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 15 23:51:29.874611 update_engine[1463]: I20250515 23:51:29.874588 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 23:51:29.874633 update_engine[1463]: I20250515 23:51:29.874614 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 23:51:29.874633 update_engine[1463]: I20250515 23:51:29.874620 1463 omaha_request_action.cc:272] Request: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: May 15 23:51:29.874633 update_engine[1463]: I20250515 23:51:29.874625 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 23:51:29.874809 update_engine[1463]: I20250515 23:51:29.874760 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 23:51:29.874992 update_engine[1463]: I20250515 23:51:29.874918 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 23:51:29.875046 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 23:51:29.876316 update_engine[1463]: E20250515 23:51:29.876276 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876329 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876338 1463 omaha_request_action.cc:617] Omaha request response: May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876344 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876350 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876354 1463 update_attempter.cc:306] Processing Done. May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876360 1463 update_attempter.cc:310] Error event sent. May 15 23:51:29.876393 update_engine[1463]: I20250515 23:51:29.876368 1463 update_check_scheduler.cc:74] Next update check in 48m32s May 15 23:51:29.876656 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 23:51:30.660410 systemd-networkd[1382]: lxc_health: Gained IPv6LL May 15 23:51:31.731069 systemd[1]: run-containerd-runc-k8s.io-e34a81e22cddaec006dc44eda25916f0950d90898bd534a7281f80114667f8ce-runc.4OWnvh.mount: Deactivated successfully. 
May 15 23:51:33.930304 kubelet[2719]: E0515 23:51:33.930221 2719 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34122->127.0.0.1:38055: write tcp 127.0.0.1:34122->127.0.0.1:38055: write: broken pipe May 15 23:51:36.085496 kubelet[2719]: E0515 23:51:36.085443 2719 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34130->127.0.0.1:38055: write tcp 127.0.0.1:34130->127.0.0.1:38055: write: broken pipe May 15 23:51:36.251620 sshd[4731]: Connection closed by 139.178.68.195 port 55188 May 15 23:51:36.252432 sshd-session[4665]: pam_unix(sshd:session): session closed for user core May 15 23:51:36.258862 systemd[1]: sshd@23-168.119.108.125:22-139.178.68.195:55188.service: Deactivated successfully. May 15 23:51:36.264334 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:51:36.269894 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. May 15 23:51:36.271726 systemd-logind[1460]: Removed session 22. 
May 15 23:51:49.466860 containerd[1485]: time="2025-05-15T23:51:49.466634663Z" level=info msg="StopPodSandbox for \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\"" May 15 23:51:49.466860 containerd[1485]: time="2025-05-15T23:51:49.466759549Z" level=info msg="TearDown network for sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" successfully" May 15 23:51:49.466860 containerd[1485]: time="2025-05-15T23:51:49.466777230Z" level=info msg="StopPodSandbox for \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" returns successfully" May 15 23:51:49.468718 containerd[1485]: time="2025-05-15T23:51:49.468287020Z" level=info msg="RemovePodSandbox for \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\"" May 15 23:51:49.468718 containerd[1485]: time="2025-05-15T23:51:49.468394905Z" level=info msg="Forcibly stopping sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\"" May 15 23:51:49.468718 containerd[1485]: time="2025-05-15T23:51:49.468486629Z" level=info msg="TearDown network for sandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" successfully" May 15 23:51:49.472992 containerd[1485]: time="2025-05-15T23:51:49.472749506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 23:51:49.472992 containerd[1485]: time="2025-05-15T23:51:49.472838591Z" level=info msg="RemovePodSandbox \"be0a0d42bc7a8298bc244d2f3ae247a13c20318af9f00c193e81427af2560462\" returns successfully" May 15 23:51:49.473906 containerd[1485]: time="2025-05-15T23:51:49.473516342Z" level=info msg="StopPodSandbox for \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\"" May 15 23:51:49.473906 containerd[1485]: time="2025-05-15T23:51:49.473642548Z" level=info msg="TearDown network for sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" successfully" May 15 23:51:49.473906 containerd[1485]: time="2025-05-15T23:51:49.473658069Z" level=info msg="StopPodSandbox for \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" returns successfully" May 15 23:51:49.474137 containerd[1485]: time="2025-05-15T23:51:49.474020005Z" level=info msg="RemovePodSandbox for \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\"" May 15 23:51:49.474137 containerd[1485]: time="2025-05-15T23:51:49.474066568Z" level=info msg="Forcibly stopping sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\"" May 15 23:51:49.474274 containerd[1485]: time="2025-05-15T23:51:49.474144891Z" level=info msg="TearDown network for sandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" successfully" May 15 23:51:49.478235 containerd[1485]: time="2025-05-15T23:51:49.478161917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 23:51:49.478393 containerd[1485]: time="2025-05-15T23:51:49.478248961Z" level=info msg="RemovePodSandbox \"5a00a059f7b8ab37d2a757088b6bdf51d59fc97abd224185c06cace65e3e8a3f\" returns successfully" May 15 23:51:51.399569 kubelet[2719]: E0515 23:51:51.399344 2719 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44320->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-3-n-32b6392e63.183fd864311ef98c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-3-n-32b6392e63,UID:25310ed763ce8c54e43aa2cd3eae5ae6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-n-32b6392e63,},FirstTimestamp:2025-05-15 23:51:45.275484556 +0000 UTC m=+355.941616086,LastTimestamp:2025-05-15 23:51:45.275484556 +0000 UTC m=+355.941616086,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-n-32b6392e63,}" May 15 23:51:51.833477 systemd[1]: cri-containerd-d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92.scope: Deactivated successfully. May 15 23:51:51.834223 systemd[1]: cri-containerd-d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92.scope: Consumed 5.809s CPU time, 17.9M memory peak, 0B memory swap peak. May 15 23:51:51.857856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92-rootfs.mount: Deactivated successfully. 
May 15 23:51:51.864250 containerd[1485]: time="2025-05-15T23:51:51.864048285Z" level=info msg="shim disconnected" id=d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92 namespace=k8s.io May 15 23:51:51.864250 containerd[1485]: time="2025-05-15T23:51:51.864115929Z" level=warning msg="cleaning up after shim disconnected" id=d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92 namespace=k8s.io May 15 23:51:51.864250 containerd[1485]: time="2025-05-15T23:51:51.864125929Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:52.296159 kubelet[2719]: E0515 23:51:52.296027 2719 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44538->10.0.0.2:2379: read: connection timed out" May 15 23:51:52.300839 systemd[1]: cri-containerd-45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7.scope: Deactivated successfully. May 15 23:51:52.302319 systemd[1]: cri-containerd-45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7.scope: Consumed 5.630s CPU time, 13.5M memory peak, 0B memory swap peak. May 15 23:51:52.328841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7-rootfs.mount: Deactivated successfully. 
May 15 23:51:52.334364 containerd[1485]: time="2025-05-15T23:51:52.334125696Z" level=info msg="shim disconnected" id=45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7 namespace=k8s.io May 15 23:51:52.334364 containerd[1485]: time="2025-05-15T23:51:52.334206420Z" level=warning msg="cleaning up after shim disconnected" id=45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7 namespace=k8s.io May 15 23:51:52.334364 containerd[1485]: time="2025-05-15T23:51:52.334215420Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:52.492865 kubelet[2719]: I0515 23:51:52.491761 2719 scope.go:117] "RemoveContainer" containerID="d653e2893b33d7bedbade12125585a25b0a5a421e5d8f9463ad572d34ac58b92" May 15 23:51:52.492865 kubelet[2719]: I0515 23:51:52.492859 2719 scope.go:117] "RemoveContainer" containerID="45c781da636314a6a2943afa15020c6ee0af25a45a20d594b9ff168843159ab7" May 15 23:51:52.495479 containerd[1485]: time="2025-05-15T23:51:52.495279495Z" level=info msg="CreateContainer within sandbox \"d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 15 23:51:52.496953 containerd[1485]: time="2025-05-15T23:51:52.496916651Z" level=info msg="CreateContainer within sandbox \"37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 15 23:51:52.512600 containerd[1485]: time="2025-05-15T23:51:52.512516094Z" level=info msg="CreateContainer within sandbox \"d9f0a0d83d74cde6019b1534a3ef50cc29ec81d08d64c1b8c51163d144b51958\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"351b2ca9695f1c09e952290b65a25b873ee4f56236d0b4038c76b73e016fc0d7\"" May 15 23:51:52.513521 containerd[1485]: time="2025-05-15T23:51:52.513467259Z" level=info msg="StartContainer for \"351b2ca9695f1c09e952290b65a25b873ee4f56236d0b4038c76b73e016fc0d7\"" May 15 23:51:52.518621 
containerd[1485]: time="2025-05-15T23:51:52.518563495Z" level=info msg="CreateContainer within sandbox \"37223b2721617fd8cfb6065e341deff92154b6f0300599fb29fc9e4ad4c9993a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e0bbe5b08d27e8a931ff5a28e99ec4ac921ad3d8c58476affe54bb6931aa994b\"" May 15 23:51:52.519330 containerd[1485]: time="2025-05-15T23:51:52.519280768Z" level=info msg="StartContainer for \"e0bbe5b08d27e8a931ff5a28e99ec4ac921ad3d8c58476affe54bb6931aa994b\"" May 15 23:51:52.558595 systemd[1]: Started cri-containerd-351b2ca9695f1c09e952290b65a25b873ee4f56236d0b4038c76b73e016fc0d7.scope - libcontainer container 351b2ca9695f1c09e952290b65a25b873ee4f56236d0b4038c76b73e016fc0d7. May 15 23:51:52.564259 systemd[1]: Started cri-containerd-e0bbe5b08d27e8a931ff5a28e99ec4ac921ad3d8c58476affe54bb6931aa994b.scope - libcontainer container e0bbe5b08d27e8a931ff5a28e99ec4ac921ad3d8c58476affe54bb6931aa994b. May 15 23:51:52.616284 containerd[1485]: time="2025-05-15T23:51:52.615572077Z" level=info msg="StartContainer for \"351b2ca9695f1c09e952290b65a25b873ee4f56236d0b4038c76b73e016fc0d7\" returns successfully" May 15 23:51:52.622011 containerd[1485]: time="2025-05-15T23:51:52.621890210Z" level=info msg="StartContainer for \"e0bbe5b08d27e8a931ff5a28e99ec4ac921ad3d8c58476affe54bb6931aa994b\" returns successfully"