May 14 23:50:22.891463 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:50:22.891489 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:50:22.891500 kernel: KASLR enabled
May 14 23:50:22.891506 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 14 23:50:22.891512 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
May 14 23:50:22.891518 kernel: random: crng init done
May 14 23:50:22.891526 kernel: secureboot: Secure boot disabled
May 14 23:50:22.891531 kernel: ACPI: Early table checksum verification disabled
May 14 23:50:22.891537 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 14 23:50:22.891546 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 14 23:50:22.891552 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891558 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891564 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891570 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891577 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891585 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891592 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891599 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891605 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:50:22.891612 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 23:50:22.891618 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 14 23:50:22.891624 kernel: NUMA: Failed to initialise from firmware
May 14 23:50:22.891631 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 14 23:50:22.891637 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
May 14 23:50:22.891643 kernel: Zone ranges:
May 14 23:50:22.891651 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 14 23:50:22.891658 kernel: DMA32 empty
May 14 23:50:22.891664 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 14 23:50:22.891671 kernel: Movable zone start for each node
May 14 23:50:22.891677 kernel: Early memory node ranges
May 14 23:50:22.891683 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
May 14 23:50:22.891690 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
May 14 23:50:22.891696 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
May 14 23:50:22.893781 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 14 23:50:22.893793 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 14 23:50:22.893800 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 14 23:50:22.893807 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 14 23:50:22.893820 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 14 23:50:22.893827 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 14 23:50:22.893834 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 14 23:50:22.893844 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 14 23:50:22.893851 kernel: psci: probing for conduit method from ACPI.
May 14 23:50:22.893858 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:50:22.893868 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:50:22.893877 kernel: psci: Trusted OS migration not required
May 14 23:50:22.893884 kernel: psci: SMC Calling Convention v1.1
May 14 23:50:22.893891 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 23:50:22.893898 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:50:22.893904 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:50:22.893912 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 23:50:22.893919 kernel: Detected PIPT I-cache on CPU0
May 14 23:50:22.893963 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:50:22.893972 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:50:22.893982 kernel: CPU features: detected: Spectre-v4
May 14 23:50:22.893989 kernel: CPU features: detected: Spectre-BHB
May 14 23:50:22.893996 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:50:22.894003 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:50:22.894010 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:50:22.894017 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:50:22.894025 kernel: alternatives: applying boot alternatives
May 14 23:50:22.894034 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:50:22.894047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:50:22.894055 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:50:22.894062 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:50:22.894071 kernel: Fallback order for Node 0: 0
May 14 23:50:22.894078 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 14 23:50:22.894085 kernel: Policy zone: Normal
May 14 23:50:22.894092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:50:22.894098 kernel: software IO TLB: area num 2.
May 14 23:50:22.894105 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 14 23:50:22.894113 kernel: Memory: 3883828K/4096000K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 212172K reserved, 0K cma-reserved)
May 14 23:50:22.894120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 23:50:22.894127 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:50:22.894134 kernel: rcu: RCU event tracing is enabled.
May 14 23:50:22.894141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 23:50:22.894149 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:50:22.894157 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:50:22.894164 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:50:22.894171 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 23:50:22.894178 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:50:22.894185 kernel: GICv3: 256 SPIs implemented
May 14 23:50:22.894192 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:50:22.894198 kernel: Root IRQ handler: gic_handle_irq
May 14 23:50:22.894210 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 23:50:22.894218 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 23:50:22.894225 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 23:50:22.894231 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 14 23:50:22.894243 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 14 23:50:22.894250 kernel: GICv3: using LPI property table @0x00000001000e0000
May 14 23:50:22.894257 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 14 23:50:22.894264 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:50:22.894271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:50:22.894278 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 23:50:22.894285 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 23:50:22.894292 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 23:50:22.894299 kernel: Console: colour dummy device 80x25
May 14 23:50:22.894306 kernel: ACPI: Core revision 20230628
May 14 23:50:22.894315 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 23:50:22.894325 kernel: pid_max: default: 32768 minimum: 301
May 14 23:50:22.894332 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:50:22.894339 kernel: landlock: Up and running.
May 14 23:50:22.894347 kernel: SELinux: Initializing.
May 14 23:50:22.894354 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:50:22.894361 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:50:22.894369 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 14 23:50:22.894377 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:50:22.894385 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:50:22.894394 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:50:22.894402 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:50:22.894410 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 23:50:22.894417 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 23:50:22.894424 kernel: Remapping and enabling EFI services.
May 14 23:50:22.894431 kernel: smp: Bringing up secondary CPUs ...
May 14 23:50:22.894438 kernel: Detected PIPT I-cache on CPU1
May 14 23:50:22.894446 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 23:50:22.894453 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 14 23:50:22.894462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:50:22.894471 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 23:50:22.894484 kernel: smp: Brought up 1 node, 2 CPUs
May 14 23:50:22.894495 kernel: SMP: Total of 2 processors activated.
May 14 23:50:22.894502 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:50:22.894510 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 23:50:22.894518 kernel: CPU features: detected: Common not Private translations
May 14 23:50:22.894526 kernel: CPU features: detected: CRC32 instructions
May 14 23:50:22.894535 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 23:50:22.894543 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 23:50:22.894553 kernel: CPU features: detected: LSE atomic instructions
May 14 23:50:22.894561 kernel: CPU features: detected: Privileged Access Never
May 14 23:50:22.894569 kernel: CPU features: detected: RAS Extension Support
May 14 23:50:22.894577 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 23:50:22.894585 kernel: CPU: All CPU(s) started at EL1
May 14 23:50:22.894593 kernel: alternatives: applying system-wide alternatives
May 14 23:50:22.894601 kernel: devtmpfs: initialized
May 14 23:50:22.894621 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:50:22.894629 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 23:50:22.894636 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:50:22.894644 kernel: SMBIOS 3.0.0 present.
May 14 23:50:22.894651 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 14 23:50:22.894660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:50:22.894667 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:50:22.894675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:50:22.894683 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:50:22.894692 kernel: audit: initializing netlink subsys (disabled)
May 14 23:50:22.894761 kernel: audit: type=2000 audit(0.010:1): state=initialized audit_enabled=0 res=1
May 14 23:50:22.894771 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:50:22.894779 kernel: cpuidle: using governor menu
May 14 23:50:22.894787 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:50:22.894794 kernel: ASID allocator initialised with 32768 entries
May 14 23:50:22.894804 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:50:22.894812 kernel: Serial: AMBA PL011 UART driver
May 14 23:50:22.894822 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:50:22.894833 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:50:22.894841 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:50:22.894849 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:50:22.894857 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:50:22.894866 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:50:22.894875 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:50:22.894882 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:50:22.894891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:50:22.894899 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:50:22.894909 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:50:22.894918 kernel: ACPI: Added _OSI(Module Device)
May 14 23:50:22.894936 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:50:22.894945 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:50:22.894954 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:50:22.894962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:50:22.895029 kernel: ACPI: Interpreter enabled
May 14 23:50:22.895040 kernel: ACPI: Using GIC for interrupt routing
May 14 23:50:22.895048 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:50:22.895060 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:50:22.895069 kernel: printk: console [ttyAMA0] enabled
May 14 23:50:22.895078 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:50:22.895266 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:50:22.895345 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:50:22.895414 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:50:22.895484 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 23:50:22.895557 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 23:50:22.895568 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 23:50:22.895577 kernel: PCI host bridge to bus 0000:00
May 14 23:50:22.895652 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 23:50:22.897797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:50:22.897901 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 23:50:22.897994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:50:22.898097 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 23:50:22.898192 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 14 23:50:22.898263 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 14 23:50:22.898341 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 14 23:50:22.898418 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.898506 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 14 23:50:22.898587 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.898660 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 14 23:50:22.898751 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.898821 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 14 23:50:22.898898 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899020 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 14 23:50:22.899129 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899216 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 14 23:50:22.899295 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899360 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 14 23:50:22.899483 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899552 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 14 23:50:22.899626 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899734 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 14 23:50:22.899817 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 14 23:50:22.899883 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 14 23:50:22.899975 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 14 23:50:22.900044 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 14 23:50:22.900119 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 14 23:50:22.900191 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 14 23:50:22.900257 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:50:22.900327 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 14 23:50:22.900420 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 14 23:50:22.900502 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 14 23:50:22.900581 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 14 23:50:22.900653 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 14 23:50:22.901412 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 14 23:50:22.901515 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 14 23:50:22.901583 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 14 23:50:22.901665 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 14 23:50:22.901813 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 14 23:50:22.901900 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 14 23:50:22.902010 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 14 23:50:22.902098 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 14 23:50:22.902178 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 14 23:50:22.902269 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 14 23:50:22.902343 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 14 23:50:22.902411 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 14 23:50:22.902498 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 14 23:50:22.902573 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 14 23:50:22.902650 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 14 23:50:22.903475 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 14 23:50:22.903593 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 14 23:50:22.903675 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 14 23:50:22.903841 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 14 23:50:22.903976 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 14 23:50:22.904055 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 14 23:50:22.904133 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 14 23:50:22.904207 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 14 23:50:22.904278 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 14 23:50:22.904343 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 14 23:50:22.904410 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 14 23:50:22.904474 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 14 23:50:22.904543 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 14 23:50:22.904616 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 14 23:50:22.904690 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 14 23:50:22.904794 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 14 23:50:22.904867 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 14 23:50:22.904948 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 14 23:50:22.905027 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 14 23:50:22.905120 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 14 23:50:22.905189 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 14 23:50:22.905254 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 14 23:50:22.905327 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 14 23:50:22.905391 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 14 23:50:22.905457 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 14 23:50:22.905524 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 14 23:50:22.905594 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 14 23:50:22.905664 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 14 23:50:22.907847 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 14 23:50:22.907991 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 14 23:50:22.908067 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 14 23:50:22.908138 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 14 23:50:22.908202 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 14 23:50:22.908278 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 14 23:50:22.908344 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 14 23:50:22.908409 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 14 23:50:22.908473 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 14 23:50:22.908544 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 14 23:50:22.908608 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 14 23:50:22.908674 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 14 23:50:22.908765 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 14 23:50:22.908839 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 14 23:50:22.908905 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 14 23:50:22.908995 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 14 23:50:22.909062 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 14 23:50:22.909128 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 14 23:50:22.909191 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 14 23:50:22.909261 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 14 23:50:22.909325 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 14 23:50:22.909390 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 14 23:50:22.909456 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 14 23:50:22.909520 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 14 23:50:22.909587 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 14 23:50:22.909652 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 14 23:50:22.911786 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 14 23:50:22.911885 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 14 23:50:22.911980 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 14 23:50:22.912056 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 14 23:50:22.912127 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 14 23:50:22.912200 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 14 23:50:22.912267 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 14 23:50:22.912335 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 14 23:50:22.912405 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 14 23:50:22.912479 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 14 23:50:22.912561 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 14 23:50:22.912628 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:50:22.912694 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 14 23:50:22.913479 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 14 23:50:22.913547 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 14 23:50:22.913610 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 14 23:50:22.913673 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 14 23:50:22.913821 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 14 23:50:22.913899 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 14 23:50:22.913986 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 14 23:50:22.914055 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 14 23:50:22.914119 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 14 23:50:22.914196 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 14 23:50:22.914263 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 14 23:50:22.914329 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 14 23:50:22.914392 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 14 23:50:22.914454 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 14 23:50:22.914517 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 14 23:50:22.914590 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 14 23:50:22.914665 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 14 23:50:22.914754 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 14 23:50:22.914821 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 14 23:50:22.914884 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 14 23:50:22.914973 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 14 23:50:22.915043 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 14 23:50:22.915108 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 14 23:50:22.915172 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 14 23:50:22.915235 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 14 23:50:22.915304 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 14 23:50:22.915375 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 14 23:50:22.915443 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 14 23:50:22.915507 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 14 23:50:22.915571 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 14 23:50:22.915639 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 14 23:50:22.915723 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 14 23:50:22.915819 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 14 23:50:22.915893 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 14 23:50:22.916016 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 14 23:50:22.916089 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 14 23:50:22.916154 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 14 23:50:22.916232 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 14 23:50:22.916296 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 14 23:50:22.916377 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 14 23:50:22.916448 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 14 23:50:22.916512 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 14 23:50:22.916575 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 14 23:50:22.916648 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 14 23:50:22.916766 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 14 23:50:22.916837 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 14 23:50:22.916902 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 14 23:50:22.916985 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 23:50:22.917051 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:50:22.917113 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 23:50:22.917191 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 14 23:50:22.917253 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 14 23:50:22.917312 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 14 23:50:22.917378 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 14 23:50:22.917438 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 14 23:50:22.917512 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 14 23:50:22.917601 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 14 23:50:22.917670 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 14 23:50:22.917814 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 14 23:50:22.917899 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 14 23:50:22.917973 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 14 23:50:22.918040 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 14 23:50:22.918120 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 14 23:50:22.918185 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 14 23:50:22.918251 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 14 23:50:22.918321 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 14 23:50:22.918380 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 14 23:50:22.918439 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 14 23:50:22.918526 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 14 23:50:22.918592 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 14 23:50:22.918663 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 14 23:50:22.918761 kernel: pci_bus 0000:08: resource 0 [io 
0x8000-0x8fff] May 14 23:50:22.918832 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] May 14 23:50:22.918892 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] May 14 23:50:22.919008 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] May 14 23:50:22.919081 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] May 14 23:50:22.919153 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] May 14 23:50:22.919168 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 23:50:22.919178 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 23:50:22.919189 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 23:50:22.919197 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 23:50:22.919205 kernel: iommu: Default domain type: Translated May 14 23:50:22.919216 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 23:50:22.919224 kernel: efivars: Registered efivars operations May 14 23:50:22.919232 kernel: vgaarb: loaded May 14 23:50:22.919240 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 23:50:22.919248 kernel: VFS: Disk quotas dquot_6.6.0 May 14 23:50:22.919256 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 23:50:22.919266 kernel: pnp: PnP ACPI init May 14 23:50:22.919347 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 23:50:22.919359 kernel: pnp: PnP ACPI: found 1 devices May 14 23:50:22.919367 kernel: NET: Registered PF_INET protocol family May 14 23:50:22.919375 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 23:50:22.919384 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 23:50:22.919392 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 23:50:22.919401 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) May 14 23:50:22.919409 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 23:50:22.919420 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 23:50:22.919427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:50:22.919436 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:50:22.919443 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 23:50:22.919518 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) May 14 23:50:22.919529 kernel: PCI: CLS 0 bytes, default 64 May 14 23:50:22.919537 kernel: kvm [1]: HYP mode not available May 14 23:50:22.919546 kernel: Initialise system trusted keyrings May 14 23:50:22.919553 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 23:50:22.919564 kernel: Key type asymmetric registered May 14 23:50:22.919571 kernel: Asymmetric key parser 'x509' registered May 14 23:50:22.919579 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 23:50:22.919587 kernel: io scheduler mq-deadline registered May 14 23:50:22.919595 kernel: io scheduler kyber registered May 14 23:50:22.919603 kernel: io scheduler bfq registered May 14 23:50:22.919616 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 14 23:50:22.919692 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 May 14 23:50:22.919820 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 May 14 23:50:22.919887 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.919976 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 May 14 23:50:22.920045 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 May 14 23:50:22.920108 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ May 14 23:50:22.920175 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 14 23:50:22.920248 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 14 23:50:22.920317 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.920394 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 14 23:50:22.920468 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 14 23:50:22.920536 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.920605 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 14 23:50:22.920677 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 14 23:50:22.920775 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.920847 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 14 23:50:22.920913 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 14 23:50:22.921018 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.921088 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 14 23:50:22.921160 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 14 23:50:22.921224 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.921292 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 14 23:50:22.921356 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 14 23:50:22.921419 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 
23:50:22.921430 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 14 23:50:22.921495 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 14 23:50:22.921568 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 14 23:50:22.921631 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:50:22.921641 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 23:50:22.921649 kernel: ACPI: button: Power Button [PWRB] May 14 23:50:22.921657 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 23:50:22.921803 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 14 23:50:22.921877 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 14 23:50:22.921892 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 23:50:22.921901 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 14 23:50:22.921981 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 14 23:50:22.921994 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 14 23:50:22.922002 kernel: thunder_xcv, ver 1.0 May 14 23:50:22.922010 kernel: thunder_bgx, ver 1.0 May 14 23:50:22.922018 kernel: nicpf, ver 1.0 May 14 23:50:22.922026 kernel: nicvf, ver 1.0 May 14 23:50:22.922100 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 23:50:22.922163 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:50:22 UTC (1747266622) May 14 23:50:22.922174 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 23:50:22.922182 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 23:50:22.922190 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 23:50:22.922198 kernel: watchdog: Hard watchdog permanently disabled May 14 23:50:22.922206 kernel: NET: Registered PF_INET6 protocol family May 14 23:50:22.922214 kernel: Segment 
Routing with IPv6 May 14 23:50:22.922222 kernel: In-situ OAM (IOAM) with IPv6 May 14 23:50:22.922232 kernel: NET: Registered PF_PACKET protocol family May 14 23:50:22.922240 kernel: Key type dns_resolver registered May 14 23:50:22.922248 kernel: registered taskstats version 1 May 14 23:50:22.922256 kernel: Loading compiled-in X.509 certificates May 14 23:50:22.922264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 14 23:50:22.922272 kernel: Key type .fscrypt registered May 14 23:50:22.922280 kernel: Key type fscrypt-provisioning registered May 14 23:50:22.922287 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 23:50:22.922295 kernel: ima: Allocated hash algorithm: sha1 May 14 23:50:22.922305 kernel: ima: No architecture policies found May 14 23:50:22.922313 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 23:50:22.922321 kernel: clk: Disabling unused clocks May 14 23:50:22.922328 kernel: Freeing unused kernel memory: 38336K May 14 23:50:22.922336 kernel: Run /init as init process May 14 23:50:22.922344 kernel: with arguments: May 14 23:50:22.922352 kernel: /init May 14 23:50:22.922359 kernel: with environment: May 14 23:50:22.922367 kernel: HOME=/ May 14 23:50:22.922376 kernel: TERM=linux May 14 23:50:22.922383 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:50:22.922392 systemd[1]: Successfully made /usr/ read-only. May 14 23:50:22.922404 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:50:22.922412 systemd[1]: Detected virtualization kvm. May 14 23:50:22.922420 systemd[1]: Detected architecture arm64. 
May 14 23:50:22.922428 systemd[1]: Running in initrd. May 14 23:50:22.922438 systemd[1]: No hostname configured, using default hostname. May 14 23:50:22.922447 systemd[1]: Hostname set to . May 14 23:50:22.922455 systemd[1]: Initializing machine ID from VM UUID. May 14 23:50:22.922463 systemd[1]: Queued start job for default target initrd.target. May 14 23:50:22.922471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:50:22.922480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:50:22.922489 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:50:22.922497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:50:22.922508 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:50:22.922517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:50:22.922528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:50:22.922537 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:50:22.922545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:50:22.922553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:50:22.922562 systemd[1]: Reached target paths.target - Path Units. May 14 23:50:22.922572 systemd[1]: Reached target slices.target - Slice Units. May 14 23:50:22.922580 systemd[1]: Reached target swap.target - Swaps. May 14 23:50:22.922589 systemd[1]: Reached target timers.target - Timer Units. May 14 23:50:22.922607 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 14 23:50:22.922619 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:50:22.922628 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:50:22.922636 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 23:50:22.922645 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:50:22.922653 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:50:22.922663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:50:22.922672 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:50:22.922680 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:50:22.922689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:50:22.922705 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:50:22.922715 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:50:22.922723 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:50:22.922732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:50:22.922743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:50:22.922751 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:50:22.922760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:50:22.922796 systemd-journald[236]: Collecting audit messages is disabled. May 14 23:50:22.922819 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:50:22.922828 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:50:22.922837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 14 23:50:22.922845 kernel: Bridge firewalling registered May 14 23:50:22.922853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:50:22.922863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:50:22.922872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:50:22.922881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:50:22.922889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:50:22.922900 systemd-journald[236]: Journal started May 14 23:50:22.922919 systemd-journald[236]: Runtime Journal (/run/log/journal/671d14f7a98b4d118a739f7cbab01504) is 8M, max 76.6M, 68.6M free. May 14 23:50:22.883417 systemd-modules-load[237]: Inserted module 'overlay' May 14 23:50:22.904204 systemd-modules-load[237]: Inserted module 'br_netfilter' May 14 23:50:22.933571 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:50:22.944918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:50:22.947880 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:50:22.953400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:50:22.958298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:50:22.962744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:50:22.971182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:50:22.972111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:50:22.977914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 14 23:50:22.986125 dracut-cmdline[272]: dracut-dracut-053 May 14 23:50:22.990858 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:50:23.011217 systemd-resolved[275]: Positive Trust Anchors: May 14 23:50:23.011235 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:50:23.011266 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:50:23.017471 systemd-resolved[275]: Defaulting to hostname 'linux'. May 14 23:50:23.018711 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:50:23.019451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:50:23.078796 kernel: SCSI subsystem initialized May 14 23:50:23.083747 kernel: Loading iSCSI transport class v2.0-870. May 14 23:50:23.091748 kernel: iscsi: registered transport (tcp) May 14 23:50:23.104755 kernel: iscsi: registered transport (qla4xxx) May 14 23:50:23.104836 kernel: QLogic iSCSI HBA Driver May 14 23:50:23.149835 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
May 14 23:50:23.162988 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:50:23.184175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 23:50:23.184234 kernel: device-mapper: uevent: version 1.0.3 May 14 23:50:23.184247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:50:23.232765 kernel: raid6: neonx8 gen() 15503 MB/s May 14 23:50:23.249767 kernel: raid6: neonx4 gen() 13587 MB/s May 14 23:50:23.266734 kernel: raid6: neonx2 gen() 13142 MB/s May 14 23:50:23.283766 kernel: raid6: neonx1 gen() 10368 MB/s May 14 23:50:23.300753 kernel: raid6: int64x8 gen() 6754 MB/s May 14 23:50:23.317791 kernel: raid6: int64x4 gen() 7312 MB/s May 14 23:50:23.334756 kernel: raid6: int64x2 gen() 6064 MB/s May 14 23:50:23.351789 kernel: raid6: int64x1 gen() 5017 MB/s May 14 23:50:23.351886 kernel: raid6: using algorithm neonx8 gen() 15503 MB/s May 14 23:50:23.368774 kernel: raid6: .... xor() 11909 MB/s, rmw enabled May 14 23:50:23.368862 kernel: raid6: using neon recovery algorithm May 14 23:50:23.373983 kernel: xor: measuring software checksum speed May 14 23:50:23.374065 kernel: 8regs : 21584 MB/sec May 14 23:50:23.374090 kernel: 32regs : 21664 MB/sec May 14 23:50:23.374113 kernel: arm64_neon : 25615 MB/sec May 14 23:50:23.374778 kernel: xor: using function: arm64_neon (25615 MB/sec) May 14 23:50:23.425775 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:50:23.438979 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:50:23.448085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:50:23.462934 systemd-udevd[457]: Using default interface naming scheme 'v255'. May 14 23:50:23.466892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 14 23:50:23.478315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:50:23.494899 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation May 14 23:50:23.530448 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:50:23.536089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:50:23.591034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:50:23.600459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:50:23.619737 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:50:23.621263 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:50:23.623652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:50:23.625296 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:50:23.631902 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:50:23.647982 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 23:50:23.680594 kernel: scsi host0: Virtio SCSI HBA May 14 23:50:23.700731 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 23:50:23.700813 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 14 23:50:23.701164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:50:23.701282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:50:23.707024 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:50:23.716578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:50:23.716889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:50:23.718776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:50:23.727988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:50:23.737002 kernel: sr 0:0:0:0: Power-on or device reset occurred May 14 23:50:23.740770 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 14 23:50:23.740994 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 23:50:23.741006 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 14 23:50:23.741095 kernel: ACPI: bus type USB registered May 14 23:50:23.742091 kernel: usbcore: registered new interface driver usbfs May 14 23:50:23.742135 kernel: usbcore: registered new interface driver hub May 14 23:50:23.742149 kernel: usbcore: registered new device driver usb May 14 23:50:23.754036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:50:23.761198 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 23:50:23.761401 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 14 23:50:23.761487 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 14 23:50:23.764539 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 23:50:23.764767 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 14 23:50:23.763969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:50:23.766811 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 14 23:50:23.767718 kernel: hub 1-0:1.0: USB hub found May 14 23:50:23.769745 kernel: hub 1-0:1.0: 4 ports detected May 14 23:50:23.770872 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 14 23:50:23.774061 kernel: hub 2-0:1.0: USB hub found May 14 23:50:23.774248 kernel: hub 2-0:1.0: 4 ports detected May 14 23:50:23.790169 kernel: sd 0:0:0:1: Power-on or device reset occurred May 14 23:50:23.793120 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 14 23:50:23.793345 kernel: sd 0:0:0:1: [sda] Write Protect is off May 14 23:50:23.793431 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 14 23:50:23.794297 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 14 23:50:23.800658 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:50:23.800736 kernel: GPT:17805311 != 80003071 May 14 23:50:23.800749 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:50:23.800766 kernel: GPT:17805311 != 80003071 May 14 23:50:23.800775 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:50:23.801746 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:50:23.801977 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:50:23.805748 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 14 23:50:23.853727 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (530) May 14 23:50:23.853786 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (511) May 14 23:50:23.872708 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 14 23:50:23.885820 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 14 23:50:23.897030 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 23:50:23.906599 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
May 14 23:50:23.907498 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 14 23:50:23.916984 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:50:23.926608 disk-uuid[580]: Primary Header is updated. May 14 23:50:23.926608 disk-uuid[580]: Secondary Entries is updated. May 14 23:50:23.926608 disk-uuid[580]: Secondary Header is updated. May 14 23:50:23.934735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:50:23.939785 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:50:24.014173 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 14 23:50:24.148807 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 14 23:50:24.148871 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 14 23:50:24.149771 kernel: usbcore: registered new interface driver usbhid May 14 23:50:24.149801 kernel: usbhid: USB HID core driver May 14 23:50:24.258746 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 14 23:50:24.387731 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 14 23:50:24.440749 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 14 23:50:24.945772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:50:24.946120 disk-uuid[581]: The operation has completed successfully. May 14 23:50:25.003613 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:50:25.003732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:50:25.041006 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 14 23:50:25.046346 sh[595]: Success May 14 23:50:25.059730 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 23:50:25.126773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:50:25.135844 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:50:25.138748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 23:50:25.166109 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 14 23:50:25.166195 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 23:50:25.166218 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:50:25.167431 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:50:25.167513 kernel: BTRFS info (device dm-0): using free space tree May 14 23:50:25.175823 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 23:50:25.180018 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:50:25.181128 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:50:25.192012 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:50:25.198982 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 23:50:25.218742 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:50:25.218818 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:50:25.219982 kernel: BTRFS info (device sda6): using free space tree May 14 23:50:25.223756 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 23:50:25.223812 kernel: BTRFS info (device sda6): auto enabling async discard May 14 23:50:25.229790 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:50:25.232365 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:50:25.237970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 23:50:25.340075 ignition[682]: Ignition 2.20.0 May 14 23:50:25.340087 ignition[682]: Stage: fetch-offline May 14 23:50:25.340124 ignition[682]: no configs at "/usr/lib/ignition/base.d" May 14 23:50:25.340132 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:50:25.340284 ignition[682]: parsed url from cmdline: "" May 14 23:50:25.342529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:50:25.340287 ignition[682]: no config URL provided May 14 23:50:25.340291 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:50:25.340298 ignition[682]: no config at "/usr/lib/ignition/user.ign" May 14 23:50:25.340304 ignition[682]: failed to fetch config: resource requires networking May 14 23:50:25.340484 ignition[682]: Ignition finished successfully May 14 23:50:25.347474 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:50:25.353920 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 23:50:25.388830 systemd-networkd[781]: lo: Link UP
May 14 23:50:25.388841 systemd-networkd[781]: lo: Gained carrier
May 14 23:50:25.390588 systemd-networkd[781]: Enumeration completed
May 14 23:50:25.390950 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:50:25.391690 systemd[1]: Reached target network.target - Network.
May 14 23:50:25.392219 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:25.392223 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:50:25.393569 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:25.393573 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:50:25.395633 systemd-networkd[781]: eth0: Link UP
May 14 23:50:25.395636 systemd-networkd[781]: eth0: Gained carrier
May 14 23:50:25.395646 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:25.401062 systemd-networkd[781]: eth1: Link UP
May 14 23:50:25.401066 systemd-networkd[781]: eth1: Gained carrier
May 14 23:50:25.401076 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:25.404033 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:50:25.419506 ignition[784]: Ignition 2.20.0
May 14 23:50:25.419523 ignition[784]: Stage: fetch
May 14 23:50:25.419839 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 14 23:50:25.419858 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:25.420010 ignition[784]: parsed url from cmdline: ""
May 14 23:50:25.420019 ignition[784]: no config URL provided
May 14 23:50:25.420027 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:50:25.420042 ignition[784]: no config at "/usr/lib/ignition/user.ign"
May 14 23:50:25.420148 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 14 23:50:25.421060 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 14 23:50:25.426816 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:50:25.464824 systemd-networkd[781]: eth0: DHCPv4 address 91.99.86.151/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 14 23:50:25.621630 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 14 23:50:25.627456 ignition[784]: GET result: OK
May 14 23:50:25.627558 ignition[784]: parsing config with SHA512: 83c57f0bf862c701715174037fd34d7a207a97fc1a9894aa641203c03fbbc49796bbe23e96791b866287564c625d941933610d707e640a5b47b5d056267a658d
May 14 23:50:25.635387 unknown[784]: fetched base config from "system"
May 14 23:50:25.635398 unknown[784]: fetched base config from "system"
May 14 23:50:25.636257 ignition[784]: fetch: fetch complete
May 14 23:50:25.635404 unknown[784]: fetched user config from "hetzner"
May 14 23:50:25.636263 ignition[784]: fetch: fetch passed
May 14 23:50:25.637811 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:50:25.636324 ignition[784]: Ignition finished successfully
May 14 23:50:25.647107 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:50:25.662973 ignition[791]: Ignition 2.20.0
May 14 23:50:25.662988 ignition[791]: Stage: kargs
May 14 23:50:25.663234 ignition[791]: no configs at "/usr/lib/ignition/base.d"
May 14 23:50:25.663245 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:25.668125 ignition[791]: kargs: kargs passed
May 14 23:50:25.668275 ignition[791]: Ignition finished successfully
May 14 23:50:25.671221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:50:25.675888 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:50:25.689238 ignition[797]: Ignition 2.20.0
May 14 23:50:25.689251 ignition[797]: Stage: disks
May 14 23:50:25.689453 ignition[797]: no configs at "/usr/lib/ignition/base.d"
May 14 23:50:25.689462 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:25.690523 ignition[797]: disks: disks passed
May 14 23:50:25.692522 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:50:25.690580 ignition[797]: Ignition finished successfully
May 14 23:50:25.694071 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:50:25.694878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:50:25.696295 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:50:25.697341 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:50:25.698576 systemd[1]: Reached target basic.target - Basic System.
May 14 23:50:25.704880 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:50:25.721518 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 14 23:50:25.726823 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:50:25.736005 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:50:25.787010 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:50:25.788255 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:50:25.790221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:50:25.800901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:50:25.805594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:50:25.809041 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 23:50:25.811052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:50:25.812674 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:50:25.818801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:50:25.822754 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814)
May 14 23:50:25.828809 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:50:25.828862 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:50:25.828874 kernel: BTRFS info (device sda6): using free space tree
May 14 23:50:25.829227 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:50:25.839798 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 14 23:50:25.839865 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:50:25.847128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:50:25.885470 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:50:25.886585 coreos-metadata[816]: May 14 23:50:25.885 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 14 23:50:25.888083 coreos-metadata[816]: May 14 23:50:25.887 INFO Fetch successful
May 14 23:50:25.888083 coreos-metadata[816]: May 14 23:50:25.887 INFO wrote hostname ci-4230-1-1-n-308caa3ab6 to /sysroot/etc/hostname
May 14 23:50:25.891744 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:50:25.897135 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
May 14 23:50:25.901871 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:50:25.907792 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:50:26.016431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:50:26.022863 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:50:26.026055 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:50:26.038736 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:50:26.058573 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:50:26.060587 ignition[931]: INFO : Ignition 2.20.0
May 14 23:50:26.060587 ignition[931]: INFO : Stage: mount
May 14 23:50:26.061841 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:50:26.061841 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:26.064220 ignition[931]: INFO : mount: mount passed
May 14 23:50:26.064220 ignition[931]: INFO : Ignition finished successfully
May 14 23:50:26.065537 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:50:26.076856 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:50:26.168408 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:50:26.177066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:50:26.201722 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944)
May 14 23:50:26.204097 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:50:26.204153 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:50:26.204165 kernel: BTRFS info (device sda6): using free space tree
May 14 23:50:26.207737 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 14 23:50:26.207809 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:50:26.211366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:50:26.230499 ignition[961]: INFO : Ignition 2.20.0
May 14 23:50:26.231306 ignition[961]: INFO : Stage: files
May 14 23:50:26.231306 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:50:26.231306 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:26.233411 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:50:26.233411 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:50:26.233411 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:50:26.236815 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:50:26.238092 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:50:26.238092 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:50:26.237292 unknown[961]: wrote ssh authorized keys file for user: core
May 14 23:50:26.240952 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:50:26.240952 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 14 23:50:26.324416 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:50:26.910944 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:50:26.912877 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:50:26.912877 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 23:50:27.130136 systemd-networkd[781]: eth1: Gained IPv6LL
May 14 23:50:27.386024 systemd-networkd[781]: eth0: Gained IPv6LL
May 14 23:50:27.575443 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:50:27.812782 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:50:27.822276 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 14 23:50:28.077778 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 23:50:28.291550 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:50:28.291550 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:50:28.294226 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:50:28.294226 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:50:28.294226 ignition[961]: INFO : files: files passed
May 14 23:50:28.294226 ignition[961]: INFO : Ignition finished successfully
May 14 23:50:28.296530 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:50:28.305144 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:50:28.306503 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:50:28.315506 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:50:28.315623 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:50:28.324035 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:50:28.324035 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:50:28.326268 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:50:28.329793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:50:28.331025 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:50:28.343103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:50:28.370791 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:50:28.371003 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:50:28.372817 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:50:28.374081 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:50:28.375384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:50:28.377990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:50:28.408030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:50:28.414946 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:50:28.428585 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:50:28.429446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:50:28.431004 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:50:28.432153 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:50:28.432282 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:50:28.433817 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:50:28.434479 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:50:28.435622 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:50:28.436810 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:50:28.437946 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:50:28.439197 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:50:28.440362 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:50:28.442346 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:50:28.443586 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:50:28.444693 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:50:28.445666 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:50:28.445812 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:50:28.447313 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:50:28.448520 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:50:28.449694 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:50:28.449823 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:50:28.451082 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:50:28.451256 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:50:28.452977 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:50:28.453163 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:50:28.454264 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:50:28.454428 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:50:28.455372 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 23:50:28.455523 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:50:28.469008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:50:28.469828 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:50:28.470024 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:50:28.476084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:50:28.478461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:50:28.478623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:50:28.481203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:50:28.481549 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:50:28.491581 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:50:28.492490 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:50:28.497082 ignition[1013]: INFO : Ignition 2.20.0
May 14 23:50:28.497082 ignition[1013]: INFO : Stage: umount
May 14 23:50:28.498640 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:50:28.498640 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 14 23:50:28.502125 ignition[1013]: INFO : umount: umount passed
May 14 23:50:28.502125 ignition[1013]: INFO : Ignition finished successfully
May 14 23:50:28.501560 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:50:28.502012 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:50:28.505294 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:50:28.507084 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:50:28.507200 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:50:28.508937 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:50:28.509022 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:50:28.513036 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 23:50:28.513145 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 23:50:28.516120 systemd[1]: Stopped target network.target - Network.
May 14 23:50:28.520429 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:50:28.520530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:50:28.522003 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:50:28.524810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:50:28.528828 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:50:28.531562 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:50:28.533827 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:50:28.535430 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:50:28.535485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:50:28.538072 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:50:28.538133 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:50:28.540834 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:50:28.540926 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:50:28.547874 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:50:28.548012 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:50:28.549972 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:50:28.552474 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:50:28.563248 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:50:28.563399 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:50:28.570564 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:50:28.570954 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:50:28.571167 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:50:28.574825 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:50:28.575143 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:50:28.575248 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:50:28.579068 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:50:28.579141 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:50:28.580380 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:50:28.580436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:50:28.586836 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:50:28.587502 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:50:28.587576 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:50:28.589691 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:50:28.589795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:50:28.591414 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:50:28.591461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:50:28.592190 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:50:28.592233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:50:28.594018 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:50:28.599689 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:50:28.599818 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:50:28.609027 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:50:28.609354 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:50:28.614638 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:50:28.614910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:50:28.617333 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:50:28.617404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:50:28.619332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:50:28.619393 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:50:28.620998 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:50:28.621057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:50:28.622973 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:50:28.623030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:50:28.624783 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:50:28.624834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:50:28.631880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:50:28.632525 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:50:28.632585 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:50:28.634372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:50:28.634428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:50:28.636323 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 23:50:28.636383 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:50:28.641451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:50:28.641576 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:50:28.643092 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:50:28.653129 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:50:28.662819 systemd[1]: Switching root.
May 14 23:50:28.691626 systemd-journald[236]: Journal stopped
May 14 23:50:29.634716 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 14 23:50:29.634781 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:50:29.634798 kernel: SELinux: policy capability open_perms=1
May 14 23:50:29.634808 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:50:29.634821 kernel: SELinux: policy capability always_check_network=0
May 14 23:50:29.634831 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:50:29.634841 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:50:29.634850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:50:29.634859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:50:29.634869 kernel: audit: type=1403 audit(1747266628.811:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:50:29.634880 systemd[1]: Successfully loaded SELinux policy in 38.772ms.
May 14 23:50:29.634916 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.978ms.
May 14 23:50:29.634929 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:50:29.634941 systemd[1]: Detected virtualization kvm.
May 14 23:50:29.634956 systemd[1]: Detected architecture arm64.
May 14 23:50:29.634967 systemd[1]: Detected first boot.
May 14 23:50:29.634977 systemd[1]: Hostname set to .
May 14 23:50:29.634987 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:50:29.634998 zram_generator::config[1059]: No configuration found.
May 14 23:50:29.635013 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:50:29.635025 systemd[1]: Populated /etc with preset unit settings.
May 14 23:50:29.635037 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:50:29.635047 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:50:29.635061 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:50:29.635078 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:50:29.635088 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:50:29.635099 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:50:29.635110 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:50:29.635120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:50:29.635132 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:50:29.635143 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:50:29.635153 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:50:29.635163 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:50:29.635174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:50:29.635185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:50:29.635195 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:50:29.635206 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:50:29.635217 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:50:29.635229 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:50:29.635239 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 23:50:29.635250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:50:29.635261 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:50:29.635271 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:50:29.635283 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:50:29.635295 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:50:29.635305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:50:29.635316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:50:29.635326 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:50:29.635337 systemd[1]: Reached target swap.target - Swaps.
May 14 23:50:29.635348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:50:29.635359 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:50:29.635372 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:50:29.635384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:50:29.635395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:50:29.635406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:50:29.635416 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:50:29.635427 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:50:29.635438 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:50:29.635448 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:50:29.635461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:50:29.635471 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:50:29.635482 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:50:29.635493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:50:29.635509 systemd[1]: Reached target machines.target - Containers.
May 14 23:50:29.635521 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:50:29.635533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:50:29.635543 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:50:29.635555 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:50:29.635566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:50:29.635577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:50:29.635587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:50:29.635598 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:50:29.635608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:50:29.635620 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:50:29.635632 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:50:29.635644 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:50:29.635656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:50:29.635666 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:50:29.635677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:50:29.635688 kernel: fuse: init (API version 7.39)
May 14 23:50:29.635704 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:50:29.635719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:50:29.635729 kernel: loop: module loaded
May 14 23:50:29.635740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:50:29.635753 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:50:29.635764 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:50:29.635774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:50:29.635785 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:50:29.635796 systemd[1]: Stopped verity-setup.service.
May 14 23:50:29.635808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:50:29.635818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:50:29.635831 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:50:29.635841 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:50:29.635852 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:50:29.635865 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:50:29.635926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:50:29.636777 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:50:29.636811 kernel: ACPI: bus type drm_connector registered
May 14 23:50:29.636825 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:50:29.636835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:50:29.636845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:50:29.636857 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:50:29.636868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:50:29.636899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:50:29.636913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:50:29.636924 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:50:29.636937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:50:29.636947 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:50:29.636959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:50:29.637002 systemd-journald[1130]: Collecting audit messages is disabled.
May 14 23:50:29.637028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:50:29.637040 systemd-journald[1130]: Journal started
May 14 23:50:29.637064 systemd-journald[1130]: Runtime Journal (/run/log/journal/671d14f7a98b4d118a739f7cbab01504) is 8M, max 76.6M, 68.6M free.
May 14 23:50:29.364195 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:50:29.373969 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 14 23:50:29.374526 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:50:29.638839 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:50:29.641715 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:50:29.642853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:50:29.646740 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:50:29.647988 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:50:29.659826 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:50:29.665825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:50:29.668846 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:50:29.670811 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:50:29.670850 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:50:29.674316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:50:29.693076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:50:29.697192 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:50:29.698492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:50:29.709458 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:50:29.717839 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:50:29.721298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:50:29.723997 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:50:29.726912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:50:29.730180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:50:29.735069 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:50:29.741056 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:50:29.746411 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:50:29.750313 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:50:29.751194 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:50:29.754512 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:50:29.764773 kernel: loop0: detected capacity change from 0 to 201592
May 14 23:50:29.764993 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:50:29.768184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:50:29.779102 systemd-journald[1130]: Time spent on flushing to /var/log/journal/671d14f7a98b4d118a739f7cbab01504 is 34.119ms for 1148 entries.
May 14 23:50:29.779102 systemd-journald[1130]: System Journal (/var/log/journal/671d14f7a98b4d118a739f7cbab01504) is 8M, max 584.8M, 576.8M free.
May 14 23:50:29.832967 systemd-journald[1130]: Received client request to flush runtime journal.
May 14 23:50:29.833030 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:50:29.782607 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:50:29.786870 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:50:29.840978 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:50:29.846389 kernel: loop1: detected capacity change from 0 to 113512
May 14 23:50:29.845204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:50:29.848469 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:50:29.863308 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:50:29.872046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:50:29.873141 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 23:50:29.893750 kernel: loop2: detected capacity change from 0 to 8
May 14 23:50:29.908592 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 14 23:50:29.908613 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 14 23:50:29.916784 kernel: loop3: detected capacity change from 0 to 123192
May 14 23:50:29.933225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:50:29.979959 kernel: loop4: detected capacity change from 0 to 201592
May 14 23:50:30.007731 kernel: loop5: detected capacity change from 0 to 113512
May 14 23:50:30.024742 kernel: loop6: detected capacity change from 0 to 8
May 14 23:50:30.030921 kernel: loop7: detected capacity change from 0 to 123192
May 14 23:50:30.047780 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 14 23:50:30.048440 (sd-merge)[1206]: Merged extensions into '/usr'.
May 14 23:50:30.057592 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:50:30.057791 systemd[1]: Reloading...
May 14 23:50:30.172818 zram_generator::config[1234]: No configuration found.
May 14 23:50:30.240300 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:50:30.335101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:50:30.396748 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:50:30.397310 systemd[1]: Reloading finished in 338 ms.
May 14 23:50:30.414634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:50:30.417956 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:50:30.430991 systemd[1]: Starting ensure-sysext.service...
May 14 23:50:30.436437 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:50:30.453190 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:50:30.457431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:50:30.469195 systemd[1]: Reload requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
May 14 23:50:30.469212 systemd[1]: Reloading...
May 14 23:50:30.484141 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:50:30.487037 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:50:30.487768 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:50:30.487991 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
May 14 23:50:30.488039 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
May 14 23:50:30.494139 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:50:30.494278 systemd-tmpfiles[1272]: Skipping /boot
May 14 23:50:30.517811 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:50:30.517823 systemd-tmpfiles[1272]: Skipping /boot
May 14 23:50:30.521843 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
May 14 23:50:30.583740 zram_generator::config[1302]: No configuration found.
May 14 23:50:30.741474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:50:30.786738 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:50:30.822871 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 23:50:30.822958 systemd[1]: Reloading finished in 353 ms.
May 14 23:50:30.832420 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:50:30.840496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:50:30.858641 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 14 23:50:30.867172 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:50:30.871048 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:50:30.873905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:50:30.876113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:50:30.886037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:50:30.891050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:50:30.892219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:50:30.892363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:50:30.895095 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:50:30.900017 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:50:30.919148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:50:30.925048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:50:30.928077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:50:30.929436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:50:30.930661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:50:30.930861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:50:30.938824 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1324)
May 14 23:50:30.937942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:50:30.946134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:50:30.948682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:50:30.949589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:50:30.949741 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:50:30.952504 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:50:30.954133 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:50:30.954557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:50:30.962667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:50:30.964262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:50:30.965356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:50:30.965456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:50:30.971520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:50:30.974018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:50:30.976676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:50:30.977241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:50:30.987048 systemd[1]: Finished ensure-sysext.service.
May 14 23:50:30.989747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:50:30.998962 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:50:31.005126 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:50:31.005326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:50:31.019476 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:50:31.019647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:50:31.021117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:50:31.021867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:50:31.023949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:50:31.027678 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:50:31.029178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:50:31.031126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:50:31.035831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:50:31.042209 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:50:31.054746 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
May 14 23:50:31.054900 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 14 23:50:31.054928 kernel: [drm] features: -context_init
May 14 23:50:31.065024 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:50:31.066500 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:50:31.074002 augenrules[1423]: No rules
May 14 23:50:31.075726 kernel: [drm] number of scanouts: 1
May 14 23:50:31.075842 kernel: [drm] number of cap sets: 0
May 14 23:50:31.076653 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:50:31.078079 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:50:31.083264 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:50:31.090371 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:50:31.095741 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 14 23:50:31.134138 kernel: Console: switching to colour frame buffer device 160x50
May 14 23:50:31.142109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 14 23:50:31.148724 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 14 23:50:31.160954 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:50:31.181795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:50:31.242623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:50:31.258031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:50:31.258253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:50:31.268556 systemd-networkd[1390]: lo: Link UP
May 14 23:50:31.268984 systemd-networkd[1390]: lo: Gained carrier
May 14 23:50:31.269019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:50:31.272086 systemd-networkd[1390]: Enumeration completed
May 14 23:50:31.272557 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:50:31.273291 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:31.273403 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:50:31.273449 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:50:31.274245 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:31.274350 systemd-networkd[1390]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:50:31.275044 systemd-networkd[1390]: eth0: Link UP
May 14 23:50:31.275152 systemd-networkd[1390]: eth0: Gained carrier
May 14 23:50:31.275226 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:31.275507 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:50:31.277696 systemd-resolved[1391]: Positive Trust Anchors:
May 14 23:50:31.277907 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:50:31.277943 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:50:31.280104 systemd-networkd[1390]: eth1: Link UP
May 14 23:50:31.280108 systemd-networkd[1390]: eth1: Gained carrier
May 14 23:50:31.280128 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:50:31.280905 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:50:31.285683 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:50:31.292086 systemd-resolved[1391]: Using system hostname 'ci-4230-1-1-n-308caa3ab6'.
May 14 23:50:31.299035 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:50:31.301208 systemd[1]: Reached target network.target - Network.
May 14 23:50:31.302580 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:50:31.311508 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:50:31.314044 systemd-networkd[1390]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:50:31.314837 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
May 14 23:50:31.349855 systemd-networkd[1390]: eth0: DHCPv4 address 91.99.86.151/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 14 23:50:31.350249 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
May 14 23:50:31.351207 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
May 14 23:50:31.355122 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:50:31.361990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:50:31.378146 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:50:31.383259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:50:31.413481 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:50:31.414910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:50:31.415836 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:50:31.416979 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:50:31.417770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:50:31.418685 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:50:31.419558 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:50:31.420374 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:50:31.421193 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:50:31.421233 systemd[1]: Reached target paths.target - Path Units.
May 14 23:50:31.421774 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:50:31.424126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:50:31.426459 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:50:31.429789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 23:50:31.430793 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 23:50:31.431602 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 23:50:31.447164 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:50:31.448579 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 23:50:31.458059 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:50:31.460413 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:50:31.461633 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:50:31.462445 systemd[1]: Reached target basic.target - Basic System.
May 14 23:50:31.462554 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:50:31.463231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:50:31.463265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:50:31.471303 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:50:31.473950 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 23:50:31.478167 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:50:31.485763 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:50:31.487949 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:50:31.488556 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:50:31.492358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:50:31.498863 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:50:31.502017 jq[1468]: false
May 14 23:50:31.502153 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 14 23:50:31.505587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:50:31.510928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:50:31.523959 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:50:31.527412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:50:31.529092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:50:31.530868 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:50:31.538345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:50:31.541922 dbus-daemon[1467]: [system] SELinux support is enabled
May 14 23:50:31.543038 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:50:31.548594 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:50:31.561243 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:50:31.561451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:50:31.564617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:50:31.565132 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:50:31.578382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:50:31.578436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:50:31.581607 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:50:31.581639 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:50:31.582962 coreos-metadata[1466]: May 14 23:50:31.582 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 14 23:50:31.583279 extend-filesystems[1469]: Found loop4
May 14 23:50:31.583279 extend-filesystems[1469]: Found loop5
May 14 23:50:31.583279 extend-filesystems[1469]: Found loop6
May 14 23:50:31.583279 extend-filesystems[1469]: Found loop7
May 14 23:50:31.583279 extend-filesystems[1469]: Found sda
May 14 23:50:31.583279 extend-filesystems[1469]: Found sda1
May 14 23:50:31.583279 extend-filesystems[1469]: Found sda2
May 14 23:50:31.583279 extend-filesystems[1469]: Found sda3
May 14 23:50:31.583279 extend-filesystems[1469]: Found usr
May 14 23:50:31.600933 extend-filesystems[1469]: Found sda4
May 14 23:50:31.600933 extend-filesystems[1469]: Found sda6
May 14 23:50:31.600933 extend-filesystems[1469]: Found sda7
May 14 23:50:31.600933 extend-filesystems[1469]: Found sda9
May 14 23:50:31.600933 extend-filesystems[1469]: Checking size of /dev/sda9
May 14 23:50:31.600587 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable
evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:50:31.612068 jq[1481]: true
May 14 23:50:31.612174 coreos-metadata[1466]: May 14 23:50:31.596 INFO Fetch successful
May 14 23:50:31.612174 coreos-metadata[1466]: May 14 23:50:31.596 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 14 23:50:31.612174 coreos-metadata[1466]: May 14 23:50:31.597 INFO Fetch successful
May 14 23:50:31.611674 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:50:31.613176 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:50:31.641113 jq[1503]: true
May 14 23:50:31.655092 extend-filesystems[1469]: Resized partition /dev/sda9
May 14 23:50:31.666919 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024)
May 14 23:50:31.670677 update_engine[1480]: I20250514 23:50:31.668183 1480 main.cc:92] Flatcar Update Engine starting
May 14 23:50:31.678693 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 14 23:50:31.678740 tar[1486]: linux-arm64/LICENSE
May 14 23:50:31.678740 tar[1486]: linux-arm64/helm
May 14 23:50:31.691135 update_engine[1480]: I20250514 23:50:31.691067 1480 update_check_scheduler.cc:74] Next update check in 7m29s
May 14 23:50:31.695210 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:50:31.701922 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:50:31.765570 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 14 23:50:31.769025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 23:50:31.821454 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:50:31.825258 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:50:31.848783 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1321)
May 14 23:50:31.849388 systemd[1]: Starting sshkeys.service...
May 14 23:50:31.858030 systemd-logind[1477]: New seat seat0.
May 14 23:50:31.868623 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (Power Button)
May 14 23:50:31.868649 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
May 14 23:50:31.870637 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:50:31.885790 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 23:50:31.891087 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 14 23:50:31.905425 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 14 23:50:31.925684 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 14 23:50:31.925684 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5
May 14 23:50:31.925684 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 14 23:50:31.932574 extend-filesystems[1469]: Resized filesystem in /dev/sda9
May 14 23:50:31.932574 extend-filesystems[1469]: Found sr0
May 14 23:50:31.927307 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:50:31.927565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:50:31.958373 coreos-metadata[1544]: May 14 23:50:31.958 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 14 23:50:31.966990 coreos-metadata[1544]: May 14 23:50:31.966 INFO Fetch successful
May 14 23:50:31.978810 unknown[1544]: wrote ssh authorized keys file for user: core
May 14 23:50:32.010821 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 23:50:32.017314 update-ssh-keys[1548]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:50:32.018020 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 23:50:32.023053 systemd[1]: Finished sshkeys.service.
May 14 23:50:32.052018 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:50:32.062942 containerd[1488]: time="2025-05-14T23:50:32.062797800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 14 23:50:32.067766 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 23:50:32.078156 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:50:32.098888 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:50:32.099134 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:50:32.110140 containerd[1488]: time="2025-05-14T23:50:32.109353280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.110195 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:50:32.113066 containerd[1488]: time="2025-05-14T23:50:32.112738400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 14 23:50:32.113268 containerd[1488]: time="2025-05-14T23:50:32.113245920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 14 23:50:32.113496 containerd[1488]: time="2025-05-14T23:50:32.113481880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 14 23:50:32.113983 containerd[1488]: time="2025-05-14T23:50:32.113719920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 14 23:50:32.114277 containerd[1488]: time="2025-05-14T23:50:32.114256680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.114643 containerd[1488]: time="2025-05-14T23:50:32.114620280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:50:32.114965 containerd[1488]: time="2025-05-14T23:50:32.114943920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115269320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115292000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..."
type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115308000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115317760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115390600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115587360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115745320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115761600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115854360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 14 23:50:32.116158 containerd[1488]: time="2025-05-14T23:50:32.115943480Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:50:32.124738 containerd[1488]: time="2025-05-14T23:50:32.124442800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 14 23:50:32.124738 containerd[1488]: time="2025-05-14T23:50:32.124521560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..."
type=io.containerd.differ.v1
May 14 23:50:32.124738 containerd[1488]: time="2025-05-14T23:50:32.124538680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 14 23:50:32.124738 containerd[1488]: time="2025-05-14T23:50:32.124559280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 14 23:50:32.124738 containerd[1488]: time="2025-05-14T23:50:32.124581160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 14 23:50:32.126160 containerd[1488]: time="2025-05-14T23:50:32.125929200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 14 23:50:32.126362 containerd[1488]: time="2025-05-14T23:50:32.126314680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 14 23:50:32.126625 containerd[1488]: time="2025-05-14T23:50:32.126584040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 14 23:50:32.126666 containerd[1488]: time="2025-05-14T23:50:32.126631280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 14 23:50:32.126685 containerd[1488]: time="2025-05-14T23:50:32.126664680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126694000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126752960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126780600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126807400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126836560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126861480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126941160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.126967520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127010280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127037040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127065400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127090720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127113600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 14 23:50:32.127775 containerd[1488]: time="2025-05-14T23:50:32.127137200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127159800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127183760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127213920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127247000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127270720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127302680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127326320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127353800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127391200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127415880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127439000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 14 23:50:32.128203 containerd[1488]: time="2025-05-14T23:50:32.127675440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130427920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130459160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130478480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130492840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130507840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130518680Z" level=info msg="NRI interface is disabled by configuration."
May 14 23:50:32.130756 containerd[1488]: time="2025-05-14T23:50:32.130529240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1
May 14 23:50:32.130980 containerd[1488]: time="2025-05-14T23:50:32.130913760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 14 23:50:32.130980 containerd[1488]: time="2025-05-14T23:50:32.130965600Z" level=info msg="Connect containerd service"
May 14 23:50:32.131100 containerd[1488]: time="2025-05-14T23:50:32.131007560Z" level=info msg="using legacy CRI server"
May 14 23:50:32.131100 containerd[1488]: time="2025-05-14T23:50:32.131015440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 23:50:32.131744 containerd[1488]: time="2025-05-14T23:50:32.131258880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 14 23:50:32.132171 containerd[1488]: time="2025-05-14T23:50:32.132103800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132331360Z" level=info msg="Start subscribing containerd event"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132388720Z" level=info msg="Start recovering state"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132459280Z" level=info msg="Start event monitor"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132470680Z" level=info msg="Start
snapshots syncer"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132479800Z" level=info msg="Start cni network conf syncer for default"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132488280Z" level=info msg="Start streaming server"
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132645720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 23:50:32.133608 containerd[1488]: time="2025-05-14T23:50:32.132696200Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 23:50:32.132853 systemd[1]: Started containerd.service - containerd container runtime.
May 14 23:50:32.138220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:50:32.141946 containerd[1488]: time="2025-05-14T23:50:32.141887880Z" level=info msg="containerd successfully booted in 0.080725s"
May 14 23:50:32.148239 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:50:32.152003 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 14 23:50:32.154987 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:50:32.378051 tar[1486]: linux-arm64/README.md
May 14 23:50:32.389249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 23:50:32.762059 systemd-networkd[1390]: eth1: Gained IPv6LL
May 14 23:50:32.763928 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
May 14 23:50:32.766097 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 23:50:32.770461 systemd[1]: Reached target network-online.target - Network is Online.
May 14 23:50:32.777972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:32.780958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 23:50:32.812480 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 23:50:32.954147 systemd-networkd[1390]: eth0: Gained IPv6LL
May 14 23:50:32.954802 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
May 14 23:50:33.543032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:33.543184 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:33.545801 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 23:50:33.553284 systemd[1]: Startup finished in 797ms (kernel) + 6.125s (initrd) + 4.780s (userspace) = 11.703s.
May 14 23:50:34.080620 kubelet[1598]: E0514 23:50:34.080567 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:34.083600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:34.083783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:34.084296 systemd[1]: kubelet.service: Consumed 850ms CPU time, 250.5M memory peak.
May 14 23:50:44.334931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 23:50:44.340052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:44.489178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:44.489489 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:44.543150 kubelet[1618]: E0514 23:50:44.543069 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:44.546419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:44.546589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:44.547301 systemd[1]: kubelet.service: Consumed 158ms CPU time, 104.6M memory peak.
May 14 23:50:54.632305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 23:50:54.640181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:54.776184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:54.779286 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:54.829358 kubelet[1633]: E0514 23:50:54.829270 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:54.832520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:54.832755 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:54.833425 systemd[1]: kubelet.service: Consumed 151ms CPU time, 103.8M memory peak.
May 14 23:51:03.148258 systemd-timesyncd[1410]: Contacted time server 192.248.187.154:123 (2.flatcar.pool.ntp.org).
May 14 23:51:03.148368 systemd-timesyncd[1410]: Initial clock synchronization to Wed 2025-05-14 23:51:03.464852 UTC.
May 14 23:51:04.884966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 14 23:51:04.898807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:05.029949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:05.033004 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:05.088010 kubelet[1649]: E0514 23:51:05.087950 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:05.090613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:05.090783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:05.091613 systemd[1]: kubelet.service: Consumed 156ms CPU time, 102.4M memory peak.
May 14 23:51:15.132205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 23:51:15.139061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:15.274306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:15.281919 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:15.330479 kubelet[1664]: E0514 23:51:15.330420 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:15.334468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:15.334694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:15.335389 systemd[1]: kubelet.service: Consumed 157ms CPU time, 102.3M memory peak.
May 14 23:51:16.867866 update_engine[1480]: I20250514 23:51:16.866963 1480 update_attempter.cc:509] Updating boot flags...
May 14 23:51:16.913806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1681)
May 14 23:51:16.973763 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1680)
May 14 23:51:25.382727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 14 23:51:25.389959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:25.507758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:25.512444 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:25.558363 kubelet[1698]: E0514 23:51:25.558316 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:25.562006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:25.562266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:25.564044 systemd[1]: kubelet.service: Consumed 152ms CPU time, 102.3M memory peak.
May 14 23:51:35.632200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 14 23:51:35.642337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:35.750798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:35.755667 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:35.800360 kubelet[1712]: E0514 23:51:35.800296 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:35.804907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:35.805457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:35.806159 systemd[1]: kubelet.service: Consumed 144ms CPU time, 101.3M memory peak.
May 14 23:51:45.881636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 14 23:51:45.888031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:46.006925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:46.007828 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:46.046389 kubelet[1728]: E0514 23:51:46.046318 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:46.049103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:46.049347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:46.050064 systemd[1]: kubelet.service: Consumed 138ms CPU time, 101.9M memory peak.
May 14 23:51:56.131777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 14 23:51:56.139012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:51:56.261974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:51:56.262667 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:51:56.313368 kubelet[1743]: E0514 23:51:56.313280 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:51:56.316026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:51:56.316209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:51:56.316763 systemd[1]: kubelet.service: Consumed 148ms CPU time, 101.9M memory peak.
May 14 23:52:06.382107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 14 23:52:06.388988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:06.504791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:06.516358 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:52:06.559416 kubelet[1757]: E0514 23:52:06.559330 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:52:06.562231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:52:06.562418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:52:06.563037 systemd[1]: kubelet.service: Consumed 149ms CPU time, 104.1M memory peak.
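The "Scheduled restart job" entries in this log land roughly 10.25 s apart, consistent with a `RestartSec=10` setting in the kubelet unit (an assumption; the unit file itself is not shown in this log). A quick sketch that extracts the cadence from timestamps already seen above:

```python
from datetime import datetime

# Timestamps of the first five "Scheduled restart job" entries in this log.
stamps = ["23:50:44.334931", "23:50:54.632305", "23:51:04.884966",
          "23:51:15.132205", "23:51:25.382727"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
# Gaps between consecutive restart jobs, in seconds (~10.25 s each).
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)
```

The extra ~0.25 s over a nominal 10 s interval is the time each failed run itself consumed before exiting.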
May 14 23:52:16.631814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 14 23:52:16.639078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:16.753659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:16.758881 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:52:16.804794 kubelet[1772]: E0514 23:52:16.804731 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:52:16.808289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:52:16.808814 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:52:16.809771 systemd[1]: kubelet.service: Consumed 148ms CPU time, 102.1M memory peak.
May 14 23:52:16.970739 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 23:52:16.982673 systemd[1]: Started sshd@0-91.99.86.151:22-147.75.109.163:35354.service - OpenSSH per-connection server daemon (147.75.109.163:35354).
May 14 23:52:17.992191 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 35354 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:17.994502 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:18.003609 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 23:52:18.009111 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 23:52:18.021683 systemd-logind[1477]: New session 1 of user core.
May 14 23:52:18.028062 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 23:52:18.044236 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 23:52:18.051123 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 23:52:18.056491 systemd-logind[1477]: New session c1 of user core.
May 14 23:52:18.197894 systemd[1785]: Queued start job for default target default.target.
May 14 23:52:18.209000 systemd[1785]: Created slice app.slice - User Application Slice.
May 14 23:52:18.209053 systemd[1785]: Reached target paths.target - Paths.
May 14 23:52:18.209131 systemd[1785]: Reached target timers.target - Timers.
May 14 23:52:18.211664 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 23:52:18.226474 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 23:52:18.226695 systemd[1785]: Reached target sockets.target - Sockets.
May 14 23:52:18.226778 systemd[1785]: Reached target basic.target - Basic System.
May 14 23:52:18.226810 systemd[1785]: Reached target default.target - Main User Target.
May 14 23:52:18.226838 systemd[1785]: Startup finished in 160ms.
May 14 23:52:18.227031 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 23:52:18.234144 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 23:52:18.949245 systemd[1]: Started sshd@1-91.99.86.151:22-147.75.109.163:57852.service - OpenSSH per-connection server daemon (147.75.109.163:57852).
May 14 23:52:19.959571 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 57852 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:19.961866 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:19.967484 systemd-logind[1477]: New session 2 of user core.
May 14 23:52:19.974093 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 23:52:20.656771 sshd[1798]: Connection closed by 147.75.109.163 port 57852
May 14 23:52:20.657769 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
May 14 23:52:20.664321 systemd[1]: sshd@1-91.99.86.151:22-147.75.109.163:57852.service: Deactivated successfully.
May 14 23:52:20.667319 systemd[1]: session-2.scope: Deactivated successfully.
May 14 23:52:20.668280 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit.
May 14 23:52:20.670224 systemd-logind[1477]: Removed session 2.
May 14 23:52:20.838340 systemd[1]: Started sshd@2-91.99.86.151:22-147.75.109.163:57862.service - OpenSSH per-connection server daemon (147.75.109.163:57862).
May 14 23:52:21.831133 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 57862 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:21.833277 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:21.840545 systemd-logind[1477]: New session 3 of user core.
May 14 23:52:21.843510 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 23:52:22.512766 sshd[1806]: Connection closed by 147.75.109.163 port 57862
May 14 23:52:22.512614 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
May 14 23:52:22.518027 systemd[1]: sshd@2-91.99.86.151:22-147.75.109.163:57862.service: Deactivated successfully.
May 14 23:52:22.521948 systemd[1]: session-3.scope: Deactivated successfully.
May 14 23:52:22.523196 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit.
May 14 23:52:22.524743 systemd-logind[1477]: Removed session 3.
May 14 23:52:22.691530 systemd[1]: Started sshd@3-91.99.86.151:22-147.75.109.163:57870.service - OpenSSH per-connection server daemon (147.75.109.163:57870).
May 14 23:52:23.678579 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 57870 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:23.680744 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:23.686128 systemd-logind[1477]: New session 4 of user core.
May 14 23:52:23.697014 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 23:52:24.363003 sshd[1814]: Connection closed by 147.75.109.163 port 57870
May 14 23:52:24.362874 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
May 14 23:52:24.367188 systemd[1]: sshd@3-91.99.86.151:22-147.75.109.163:57870.service: Deactivated successfully.
May 14 23:52:24.367328 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit.
May 14 23:52:24.369514 systemd[1]: session-4.scope: Deactivated successfully.
May 14 23:52:24.373326 systemd-logind[1477]: Removed session 4.
May 14 23:52:24.533515 systemd[1]: Started sshd@4-91.99.86.151:22-147.75.109.163:57874.service - OpenSSH per-connection server daemon (147.75.109.163:57874).
May 14 23:52:25.532050 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 57874 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:25.534801 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:25.541922 systemd-logind[1477]: New session 5 of user core.
May 14 23:52:25.547939 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 23:52:26.062087 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 23:52:26.062385 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:52:26.086691 sudo[1823]: pam_unix(sudo:session): session closed for user root
May 14 23:52:26.245775 sshd[1822]: Connection closed by 147.75.109.163 port 57874
May 14 23:52:26.247373 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
May 14 23:52:26.253587 systemd[1]: sshd@4-91.99.86.151:22-147.75.109.163:57874.service: Deactivated successfully.
May 14 23:52:26.256567 systemd[1]: session-5.scope: Deactivated successfully.
May 14 23:52:26.257768 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit.
May 14 23:52:26.259246 systemd-logind[1477]: Removed session 5.
May 14 23:52:26.424277 systemd[1]: Started sshd@5-91.99.86.151:22-147.75.109.163:57878.service - OpenSSH per-connection server daemon (147.75.109.163:57878).
May 14 23:52:26.881512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 14 23:52:26.888980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:27.029938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:27.031517 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:52:27.077239 kubelet[1839]: E0514 23:52:27.077157 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:52:27.080084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:52:27.080376 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:52:27.081000 systemd[1]: kubelet.service: Consumed 154ms CPU time, 104.2M memory peak.
May 14 23:52:27.443676 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 57878 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:27.446070 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:27.452024 systemd-logind[1477]: New session 6 of user core.
May 14 23:52:27.460047 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 23:52:27.979813 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 23:52:27.980092 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:52:27.984334 sudo[1848]: pam_unix(sudo:session): session closed for user root
May 14 23:52:27.990968 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 23:52:27.991356 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:52:28.017549 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:52:28.051663 augenrules[1870]: No rules
May 14 23:52:28.053456 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:52:28.053745 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:52:28.056279 sudo[1847]: pam_unix(sudo:session): session closed for user root
May 14 23:52:28.218808 sshd[1846]: Connection closed by 147.75.109.163 port 57878
May 14 23:52:28.219306 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
May 14 23:52:28.224029 systemd[1]: sshd@5-91.99.86.151:22-147.75.109.163:57878.service: Deactivated successfully.
May 14 23:52:28.226177 systemd[1]: session-6.scope: Deactivated successfully.
May 14 23:52:28.228455 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit.
May 14 23:52:28.229638 systemd-logind[1477]: Removed session 6.
May 14 23:52:28.392137 systemd[1]: Started sshd@6-91.99.86.151:22-147.75.109.163:34092.service - OpenSSH per-connection server daemon (147.75.109.163:34092).
May 14 23:52:29.371388 sshd[1879]: Accepted publickey for core from 147.75.109.163 port 34092 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:52:29.373473 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:29.379455 systemd-logind[1477]: New session 7 of user core.
May 14 23:52:29.386944 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 23:52:29.891029 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:52:29.891341 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:52:30.235150 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:52:30.235495 (dockerd)[1900]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:52:30.484566 dockerd[1900]: time="2025-05-14T23:52:30.483068937Z" level=info msg="Starting up"
May 14 23:52:30.581781 systemd[1]: var-lib-docker-metacopy\x2dcheck1283742315-merged.mount: Deactivated successfully.
May 14 23:52:30.591633 dockerd[1900]: time="2025-05-14T23:52:30.591586727Z" level=info msg="Loading containers: start."
May 14 23:52:30.756744 kernel: Initializing XFRM netlink socket
May 14 23:52:30.853637 systemd-networkd[1390]: docker0: Link UP
May 14 23:52:30.882846 dockerd[1900]: time="2025-05-14T23:52:30.882776159Z" level=info msg="Loading containers: done."
May 14 23:52:30.901275 dockerd[1900]: time="2025-05-14T23:52:30.901192979Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:52:30.901690 dockerd[1900]: time="2025-05-14T23:52:30.901317192Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 14 23:52:30.901690 dockerd[1900]: time="2025-05-14T23:52:30.901608303Z" level=info msg="Daemon has completed initialization"
May 14 23:52:30.943597 dockerd[1900]: time="2025-05-14T23:52:30.943462791Z" level=info msg="API listen on /run/docker.sock"
May 14 23:52:30.944065 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:52:31.998540 containerd[1488]: time="2025-05-14T23:52:31.998485588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 14 23:52:32.801805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895608343.mount: Deactivated successfully.
May 14 23:52:33.613876 containerd[1488]: time="2025-05-14T23:52:33.613822025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:33.615199 containerd[1488]: time="2025-05-14T23:52:33.615142399Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233210"
May 14 23:52:33.616060 containerd[1488]: time="2025-05-14T23:52:33.616005247Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:33.619864 containerd[1488]: time="2025-05-14T23:52:33.619787311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:33.621139 containerd[1488]: time="2025-05-14T23:52:33.621083883Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.622538769s"
May 14 23:52:33.621139 containerd[1488]: time="2025-05-14T23:52:33.621134968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 14 23:52:33.622677 containerd[1488]: time="2025-05-14T23:52:33.621930009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 14 23:52:35.598495 containerd[1488]: time="2025-05-14T23:52:35.598359169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:35.600836 containerd[1488]: time="2025-05-14T23:52:35.600641396Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529591"
May 14 23:52:35.600836 containerd[1488]: time="2025-05-14T23:52:35.600729445Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:35.604402 containerd[1488]: time="2025-05-14T23:52:35.604324443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:35.606079 containerd[1488]: time="2025-05-14T23:52:35.605920522Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.983951069s"
May 14 23:52:35.606079 containerd[1488]: time="2025-05-14T23:52:35.605969007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 14 23:52:35.606911 containerd[1488]: time="2025-05-14T23:52:35.606720962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 14 23:52:36.872876 containerd[1488]: time="2025-05-14T23:52:36.872816222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:36.874469 containerd[1488]: time="2025-05-14T23:52:36.874408179Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482193"
May 14 23:52:36.875272 containerd[1488]: time="2025-05-14T23:52:36.875188576Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:36.878113 containerd[1488]: time="2025-05-14T23:52:36.878042778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:36.879571 containerd[1488]: time="2025-05-14T23:52:36.879410753Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.272651348s"
May 14 23:52:36.879571 containerd[1488]: time="2025-05-14T23:52:36.879460998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 14 23:52:36.880325 containerd[1488]: time="2025-05-14T23:52:36.880129944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 14 23:52:37.131976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 14 23:52:37.139008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:37.254518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:37.258990 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:52:37.307546 kubelet[2159]: E0514 23:52:37.307334 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:52:37.310326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:52:37.310496 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:52:37.311135 systemd[1]: kubelet.service: Consumed 149ms CPU time, 103.3M memory peak.
May 14 23:52:38.075094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265659966.mount: Deactivated successfully.
May 14 23:52:38.421346 containerd[1488]: time="2025-05-14T23:52:38.421267902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:38.423071 containerd[1488]: time="2025-05-14T23:52:38.422998389Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370377"
May 14 23:52:38.423653 containerd[1488]: time="2025-05-14T23:52:38.423388707Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:38.425743 containerd[1488]: time="2025-05-14T23:52:38.425690410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:38.426590 containerd[1488]: time="2025-05-14T23:52:38.426557575Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.546390907s"
May 14 23:52:38.426661 containerd[1488]: time="2025-05-14T23:52:38.426590898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 14 23:52:38.427572 containerd[1488]: time="2025-05-14T23:52:38.427531709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 14 23:52:39.043915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120654528.mount: Deactivated successfully.
May 14 23:52:39.870556 containerd[1488]: time="2025-05-14T23:52:39.870413981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:39.872811 containerd[1488]: time="2025-05-14T23:52:39.872727355Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
May 14 23:52:39.874358 containerd[1488]: time="2025-05-14T23:52:39.874262471Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:39.878387 containerd[1488]: time="2025-05-14T23:52:39.878287215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:39.880723 containerd[1488]: time="2025-05-14T23:52:39.879434582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.45177286s"
May 14 23:52:39.880723 containerd[1488]: time="2025-05-14T23:52:39.879479225Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 14 23:52:39.882353 containerd[1488]: time="2025-05-14T23:52:39.882292238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 23:52:40.406557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1160552222.mount: Deactivated successfully.
May 14 23:52:40.413337 containerd[1488]: time="2025-05-14T23:52:40.413244616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:40.414949 containerd[1488]: time="2025-05-14T23:52:40.414865331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
May 14 23:52:40.415689 containerd[1488]: time="2025-05-14T23:52:40.415370640Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:40.418583 containerd[1488]: time="2025-05-14T23:52:40.418508242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:40.419387 containerd[1488]: time="2025-05-14T23:52:40.419348157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 536.992996ms"
May 14 23:52:40.419598 containerd[1488]: time="2025-05-14T23:52:40.419492942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 14 23:52:40.420235 containerd[1488]: time="2025-05-14T23:52:40.420056205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 14 23:52:40.990661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797550036.mount: Deactivated successfully.
May 14 23:52:43.871649 containerd[1488]: time="2025-05-14T23:52:43.871571619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:43.873870 containerd[1488]: time="2025-05-14T23:52:43.873807545Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537"
May 14 23:52:43.875766 containerd[1488]: time="2025-05-14T23:52:43.874879853Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:43.878979 containerd[1488]: time="2025-05-14T23:52:43.878889546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:43.881503 containerd[1488]: time="2025-05-14T23:52:43.881345734Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.461257093s"
May 14
23:52:43.881503 containerd[1488]: time="2025-05-14T23:52:43.881388610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 14 23:52:47.381808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. May 14 23:52:47.392298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:47.521229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:47.526558 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:47.571240 kubelet[2314]: E0514 23:52:47.571191 2314 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:47.574590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:47.575025 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:47.577507 systemd[1]: kubelet.service: Consumed 143ms CPU time, 101.8M memory peak. May 14 23:52:49.336774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:49.336979 systemd[1]: kubelet.service: Consumed 143ms CPU time, 101.8M memory peak. May 14 23:52:49.350218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:49.390871 systemd[1]: Reload requested from client PID 2328 ('systemctl') (unit session-7.scope)... May 14 23:52:49.390892 systemd[1]: Reloading... May 14 23:52:49.532473 zram_generator::config[2373]: No configuration found. 
May 14 23:52:49.636150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:52:49.728118 systemd[1]: Reloading finished in 336 ms. May 14 23:52:49.795293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:49.795446 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:52:49.802875 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:49.804414 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:52:49.804890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:49.804980 systemd[1]: kubelet.service: Consumed 100ms CPU time, 91.8M memory peak. May 14 23:52:49.811301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:49.928762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:49.941050 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:52:49.997568 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:52:49.998182 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:52:49.998295 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:52:49.999048 kubelet[2428]: I0514 23:52:49.998971 2428 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:52:51.247484 kubelet[2428]: I0514 23:52:51.247435 2428 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:52:51.247975 kubelet[2428]: I0514 23:52:51.247955 2428 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:52:51.248445 kubelet[2428]: I0514 23:52:51.248421 2428 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:52:51.278038 kubelet[2428]: E0514 23:52:51.277991 2428 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.86.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError" May 14 23:52:51.279045 kubelet[2428]: I0514 23:52:51.279012 2428 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:52:51.293031 kubelet[2428]: E0514 23:52:51.292921 2428 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:52:51.293219 kubelet[2428]: I0514 23:52:51.293200 2428 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:52:51.296940 kubelet[2428]: I0514 23:52:51.296911 2428 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:52:51.298050 kubelet[2428]: I0514 23:52:51.297287 2428 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:52:51.298050 kubelet[2428]: I0514 23:52:51.297319 2428 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-308caa3ab6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:52:51.298050 kubelet[2428]: I0514 23:52:51.297575 2428 topology_manager.go:138] "Creating topology manager 
with none policy" May 14 23:52:51.298050 kubelet[2428]: I0514 23:52:51.297583 2428 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:52:51.298337 kubelet[2428]: I0514 23:52:51.297819 2428 state_mem.go:36] "Initialized new in-memory state store" May 14 23:52:51.301430 kubelet[2428]: I0514 23:52:51.301403 2428 kubelet.go:446] "Attempting to sync node with API server" May 14 23:52:51.301565 kubelet[2428]: I0514 23:52:51.301553 2428 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:52:51.301635 kubelet[2428]: I0514 23:52:51.301626 2428 kubelet.go:352] "Adding apiserver pod source" May 14 23:52:51.301696 kubelet[2428]: I0514 23:52:51.301686 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:52:51.304156 kubelet[2428]: W0514 23:52:51.304068 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.86.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-308caa3ab6&limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused May 14 23:52:51.304256 kubelet[2428]: E0514 23:52:51.304188 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.86.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-308caa3ab6&limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError" May 14 23:52:51.305245 kubelet[2428]: I0514 23:52:51.305222 2428 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:52:51.306042 kubelet[2428]: I0514 23:52:51.306017 2428 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:52:51.306246 kubelet[2428]: W0514 23:52:51.306232 2428 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:52:51.307315 kubelet[2428]: I0514 23:52:51.307289 2428 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:52:51.307437 kubelet[2428]: I0514 23:52:51.307426 2428 server.go:1287] "Started kubelet" May 14 23:52:51.307646 kubelet[2428]: W0514 23:52:51.307624 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.86.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused May 14 23:52:51.307780 kubelet[2428]: E0514 23:52:51.307756 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.86.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError" May 14 23:52:51.314129 kubelet[2428]: E0514 23:52:51.313813 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.86.151:6443/api/v1/namespaces/default/events\": dial tcp 91.99.86.151:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-n-308caa3ab6.183f89deff9f0732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-308caa3ab6,UID:ci-4230-1-1-n-308caa3ab6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-308caa3ab6,},FirstTimestamp:2025-05-14 23:52:51.307398962 +0000 UTC m=+1.360237319,LastTimestamp:2025-05-14 23:52:51.307398962 +0000 UTC m=+1.360237319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-308caa3ab6,}" May 14 
23:52:51.315540 kubelet[2428]: I0514 23:52:51.314343 2428 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:52:51.315623 kubelet[2428]: I0514 23:52:51.315551 2428 server.go:490] "Adding debug handlers to kubelet server" May 14 23:52:51.316581 kubelet[2428]: I0514 23:52:51.316504 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:52:51.316784 kubelet[2428]: I0514 23:52:51.316760 2428 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:52:51.316832 kubelet[2428]: I0514 23:52:51.316759 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:52:51.317178 kubelet[2428]: I0514 23:52:51.317151 2428 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:52:51.320881 kubelet[2428]: E0514 23:52:51.320846 2428 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" May 14 23:52:51.320998 kubelet[2428]: I0514 23:52:51.320987 2428 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:52:51.321276 kubelet[2428]: I0514 23:52:51.321256 2428 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:52:51.321558 kubelet[2428]: I0514 23:52:51.321487 2428 reconciler.go:26] "Reconciler: start to sync state" May 14 23:52:51.322419 kubelet[2428]: W0514 23:52:51.322375 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.86.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused May 14 23:52:51.322671 kubelet[2428]: E0514 23:52:51.322648 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://91.99.86.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError" May 14 23:52:51.323025 kubelet[2428]: I0514 23:52:51.323003 2428 factory.go:221] Registration of the systemd container factory successfully May 14 23:52:51.323178 kubelet[2428]: I0514 23:52:51.323160 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:52:51.325659 kubelet[2428]: E0514 23:52:51.325628 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.86.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-308caa3ab6?timeout=10s\": dial tcp 91.99.86.151:6443: connect: connection refused" interval="200ms" May 14 23:52:51.327731 kubelet[2428]: I0514 23:52:51.326142 2428 factory.go:221] Registration of the containerd container factory successfully May 14 23:52:51.344948 kubelet[2428]: I0514 23:52:51.344899 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:52:51.349950 kubelet[2428]: I0514 23:52:51.349911 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:52:51.350135 kubelet[2428]: I0514 23:52:51.350118 2428 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:52:51.350240 kubelet[2428]: I0514 23:52:51.350225 2428 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 23:52:51.350316 kubelet[2428]: I0514 23:52:51.350303 2428 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:52:51.350496 kubelet[2428]: E0514 23:52:51.350456 2428 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:52:51.361267 kubelet[2428]: W0514 23:52:51.361208 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.86.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused May 14 23:52:51.361456 kubelet[2428]: E0514 23:52:51.361429 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.86.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError" May 14 23:52:51.363174 kubelet[2428]: I0514 23:52:51.363148 2428 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:52:51.363304 kubelet[2428]: I0514 23:52:51.363292 2428 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:52:51.363365 kubelet[2428]: I0514 23:52:51.363357 2428 state_mem.go:36] "Initialized new in-memory state store" May 14 23:52:51.366424 kubelet[2428]: I0514 23:52:51.366385 2428 policy_none.go:49] "None policy: Start" May 14 23:52:51.366584 kubelet[2428]: I0514 23:52:51.366566 2428 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:52:51.366675 kubelet[2428]: I0514 23:52:51.366660 2428 state_mem.go:35] "Initializing new in-memory state store" May 14 23:52:51.375242 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 14 23:52:51.390321 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:52:51.395466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:52:51.406552 kubelet[2428]: I0514 23:52:51.405575 2428 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:52:51.406552 kubelet[2428]: I0514 23:52:51.405995 2428 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:52:51.406552 kubelet[2428]: I0514 23:52:51.406018 2428 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:52:51.406552 kubelet[2428]: I0514 23:52:51.406424 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:52:51.410695 kubelet[2428]: E0514 23:52:51.410657 2428 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 23:52:51.411075 kubelet[2428]: E0514 23:52:51.411012 2428 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-n-308caa3ab6\" not found" May 14 23:52:51.467551 systemd[1]: Created slice kubepods-burstable-poda9474fde7866eaeef600502affd6952d.slice - libcontainer container kubepods-burstable-poda9474fde7866eaeef600502affd6952d.slice. May 14 23:52:51.483958 kubelet[2428]: E0514 23:52:51.483464 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.486789 systemd[1]: Created slice kubepods-burstable-podc1530ec26300539c2bad8dd7e11c7a72.slice - libcontainer container kubepods-burstable-podc1530ec26300539c2bad8dd7e11c7a72.slice. 
May 14 23:52:51.489503 kubelet[2428]: E0514 23:52:51.489277 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.492575 systemd[1]: Created slice kubepods-burstable-pod7af6849a88c275f94326fe8de7f99155.slice - libcontainer container kubepods-burstable-pod7af6849a88c275f94326fe8de7f99155.slice. May 14 23:52:51.494524 kubelet[2428]: E0514 23:52:51.494330 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.509564 kubelet[2428]: I0514 23:52:51.508910 2428 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.509564 kubelet[2428]: E0514 23:52:51.509422 2428 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.86.151:6443/api/v1/nodes\": dial tcp 91.99.86.151:6443: connect: connection refused" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.527085 kubelet[2428]: E0514 23:52:51.527022 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.86.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-308caa3ab6?timeout=10s\": dial tcp 91.99.86.151:6443: connect: connection refused" interval="400ms" May 14 23:52:51.622997 kubelet[2428]: I0514 23:52:51.622525 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.622997 kubelet[2428]: I0514 23:52:51.622594 2428 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.622997 kubelet[2428]: I0514 23:52:51.622629 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.622997 kubelet[2428]: I0514 23:52:51.622658 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.622997 kubelet[2428]: I0514 23:52:51.622686 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.623447 kubelet[2428]: I0514 23:52:51.622756 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" 
(UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.623447 kubelet[2428]: I0514 23:52:51.622788 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7af6849a88c275f94326fe8de7f99155-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-308caa3ab6\" (UID: \"7af6849a88c275f94326fe8de7f99155\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.623447 kubelet[2428]: I0514 23:52:51.622817 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.623447 kubelet[2428]: I0514 23:52:51.622857 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.712545 kubelet[2428]: I0514 23:52:51.712463 2428 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.713103 kubelet[2428]: E0514 23:52:51.713047 2428 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.86.151:6443/api/v1/nodes\": dial tcp 91.99.86.151:6443: connect: connection refused" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:51.785342 containerd[1488]: time="2025-05-14T23:52:51.785187337Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-308caa3ab6,Uid:a9474fde7866eaeef600502affd6952d,Namespace:kube-system,Attempt:0,}" May 14 23:52:51.791079 containerd[1488]: time="2025-05-14T23:52:51.790915795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-308caa3ab6,Uid:c1530ec26300539c2bad8dd7e11c7a72,Namespace:kube-system,Attempt:0,}" May 14 23:52:51.796022 containerd[1488]: time="2025-05-14T23:52:51.795960369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-308caa3ab6,Uid:7af6849a88c275f94326fe8de7f99155,Namespace:kube-system,Attempt:0,}" May 14 23:52:51.928184 kubelet[2428]: E0514 23:52:51.928123 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.86.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-308caa3ab6?timeout=10s\": dial tcp 91.99.86.151:6443: connect: connection refused" interval="800ms" May 14 23:52:52.116306 kubelet[2428]: I0514 23:52:52.116139 2428 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:52.116719 kubelet[2428]: E0514 23:52:52.116592 2428 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.86.151:6443/api/v1/nodes\": dial tcp 91.99.86.151:6443: connect: connection refused" node="ci-4230-1-1-n-308caa3ab6" May 14 23:52:52.131648 kubelet[2428]: W0514 23:52:52.131467 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.86.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused May 14 23:52:52.131648 kubelet[2428]: E0514 23:52:52.131592 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://91.99.86.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError"
May 14 23:52:52.329997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326576212.mount: Deactivated successfully.
May 14 23:52:52.337150 containerd[1488]: time="2025-05-14T23:52:52.336756472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:52:52.338764 containerd[1488]: time="2025-05-14T23:52:52.338680777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
May 14 23:52:52.341290 containerd[1488]: time="2025-05-14T23:52:52.341229172Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:52:52.343959 containerd[1488]: time="2025-05-14T23:52:52.343909721Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:52:52.345019 containerd[1488]: time="2025-05-14T23:52:52.344968429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 14 23:52:52.347373 containerd[1488]: time="2025-05-14T23:52:52.347307754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:52:52.348504 containerd[1488]: time="2025-05-14T23:52:52.348313265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.990655ms"
May 14 23:52:52.350160 containerd[1488]: time="2025-05-14T23:52:52.349873828Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:52:52.351325 containerd[1488]: time="2025-05-14T23:52:52.351251880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 14 23:52:52.356032 containerd[1488]: time="2025-05-14T23:52:52.355930091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.93566ms"
May 14 23:52:52.373226 containerd[1488]: time="2025-05-14T23:52:52.372890819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.847894ms"
May 14 23:52:52.470097 kubelet[2428]: W0514 23:52:52.470017 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.86.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-308caa3ab6&limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused
May 14 23:52:52.470097 kubelet[2428]: E0514 23:52:52.470101 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.86.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-308caa3ab6&limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError"
May 14 23:52:52.507860 containerd[1488]: time="2025-05-14T23:52:52.506950681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:52:52.507860 containerd[1488]: time="2025-05-14T23:52:52.507749001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:52:52.508201 containerd[1488]: time="2025-05-14T23:52:52.508065466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.508465 containerd[1488]: time="2025-05-14T23:52:52.508402889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.511329 containerd[1488]: time="2025-05-14T23:52:52.511234830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:52:52.511527 containerd[1488]: time="2025-05-14T23:52:52.511330986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:52:52.511527 containerd[1488]: time="2025-05-14T23:52:52.511349545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.511980 containerd[1488]: time="2025-05-14T23:52:52.511919397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.512341 containerd[1488]: time="2025-05-14T23:52:52.512255500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:52:52.512846 containerd[1488]: time="2025-05-14T23:52:52.512318817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:52:52.512846 containerd[1488]: time="2025-05-14T23:52:52.512402853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.513810 containerd[1488]: time="2025-05-14T23:52:52.513608994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:52.537965 systemd[1]: Started cri-containerd-379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de.scope - libcontainer container 379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de.
May 14 23:52:52.543225 systemd[1]: Started cri-containerd-bd2d890682eb594334fd0f78e3f3920138258aa768dd10537d89e674a82b49ce.scope - libcontainer container bd2d890682eb594334fd0f78e3f3920138258aa768dd10537d89e674a82b49ce.
May 14 23:52:52.546820 systemd[1]: Started cri-containerd-4a41c35dcfb8e2b744fe56e649b5ba0a1c4b1bb10699b4cec0450f37d0365500.scope - libcontainer container 4a41c35dcfb8e2b744fe56e649b5ba0a1c4b1bb10699b4cec0450f37d0365500.
May 14 23:52:52.617980 containerd[1488]: time="2025-05-14T23:52:52.617919476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-308caa3ab6,Uid:c1530ec26300539c2bad8dd7e11c7a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de\""
May 14 23:52:52.623882 kubelet[2428]: W0514 23:52:52.623683 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.86.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused
May 14 23:52:52.624976 kubelet[2428]: E0514 23:52:52.624650 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.86.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError"
May 14 23:52:52.626767 containerd[1488]: time="2025-05-14T23:52:52.626730963Z" level=info msg="CreateContainer within sandbox \"379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 23:52:52.630229 containerd[1488]: time="2025-05-14T23:52:52.630191713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-308caa3ab6,Uid:a9474fde7866eaeef600502affd6952d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a41c35dcfb8e2b744fe56e649b5ba0a1c4b1bb10699b4cec0450f37d0365500\""
May 14 23:52:52.635103 containerd[1488]: time="2025-05-14T23:52:52.635030116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-308caa3ab6,Uid:7af6849a88c275f94326fe8de7f99155,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd2d890682eb594334fd0f78e3f3920138258aa768dd10537d89e674a82b49ce\""
May 14 23:52:52.636722 containerd[1488]: time="2025-05-14T23:52:52.636612278Z" level=info msg="CreateContainer within sandbox \"4a41c35dcfb8e2b744fe56e649b5ba0a1c4b1bb10699b4cec0450f37d0365500\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 23:52:52.640073 containerd[1488]: time="2025-05-14T23:52:52.640040110Z" level=info msg="CreateContainer within sandbox \"bd2d890682eb594334fd0f78e3f3920138258aa768dd10537d89e674a82b49ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 23:52:52.647928 containerd[1488]: time="2025-05-14T23:52:52.647884445Z" level=info msg="CreateContainer within sandbox \"379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c\""
May 14 23:52:52.650153 containerd[1488]: time="2025-05-14T23:52:52.648823959Z" level=info msg="StartContainer for \"2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c\""
May 14 23:52:52.663246 containerd[1488]: time="2025-05-14T23:52:52.663198134Z" level=info msg="CreateContainer within sandbox \"4a41c35dcfb8e2b744fe56e649b5ba0a1c4b1bb10699b4cec0450f37d0365500\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2eceeb068f5cea4ad4b743bae5350396e8e5c9aa7af8b41e3b93ffd9a959a872\""
May 14 23:52:52.663962 containerd[1488]: time="2025-05-14T23:52:52.663892700Z" level=info msg="StartContainer for \"2eceeb068f5cea4ad4b743bae5350396e8e5c9aa7af8b41e3b93ffd9a959a872\""
May 14 23:52:52.663962 containerd[1488]: time="2025-05-14T23:52:52.663922738Z" level=info msg="CreateContainer within sandbox \"bd2d890682eb594334fd0f78e3f3920138258aa768dd10537d89e674a82b49ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5\""
May 14 23:52:52.664743 containerd[1488]: time="2025-05-14T23:52:52.664605825Z" level=info msg="StartContainer for \"a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5\""
May 14 23:52:52.683097 systemd[1]: Started cri-containerd-2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c.scope - libcontainer container 2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c.
May 14 23:52:52.717934 systemd[1]: Started cri-containerd-2eceeb068f5cea4ad4b743bae5350396e8e5c9aa7af8b41e3b93ffd9a959a872.scope - libcontainer container 2eceeb068f5cea4ad4b743bae5350396e8e5c9aa7af8b41e3b93ffd9a959a872.
May 14 23:52:52.727065 systemd[1]: Started cri-containerd-a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5.scope - libcontainer container a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5.
May 14 23:52:52.730208 kubelet[2428]: E0514 23:52:52.730173 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.86.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-308caa3ab6?timeout=10s\": dial tcp 91.99.86.151:6443: connect: connection refused" interval="1.6s"
May 14 23:52:52.757015 containerd[1488]: time="2025-05-14T23:52:52.756949174Z" level=info msg="StartContainer for \"2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c\" returns successfully"
May 14 23:52:52.790766 containerd[1488]: time="2025-05-14T23:52:52.790647600Z" level=info msg="StartContainer for \"2eceeb068f5cea4ad4b743bae5350396e8e5c9aa7af8b41e3b93ffd9a959a872\" returns successfully"
May 14 23:52:52.808281 containerd[1488]: time="2025-05-14T23:52:52.807095713Z" level=info msg="StartContainer for \"a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5\" returns successfully"
May 14 23:52:52.911909 kubelet[2428]: W0514 23:52:52.911836 2428 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.86.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.86.151:6443: connect: connection refused
May 14 23:52:52.912093 kubelet[2428]: E0514 23:52:52.911918 2428 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.86.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.86.151:6443: connect: connection refused" logger="UnhandledError"
May 14 23:52:52.919190 kubelet[2428]: I0514 23:52:52.919152 2428 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:53.381778 kubelet[2428]: E0514 23:52:53.381439 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:53.384140 kubelet[2428]: E0514 23:52:53.384109 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:53.389633 kubelet[2428]: E0514 23:52:53.389577 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:54.388579 kubelet[2428]: E0514 23:52:54.388391 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:54.389921 kubelet[2428]: E0514 23:52:54.389684 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:55.391966 kubelet[2428]: E0514 23:52:55.391857 2428 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.359911 kubelet[2428]: E0514 23:52:56.359858 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-n-308caa3ab6\" not found" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.473354 kubelet[2428]: E0514 23:52:56.473077 2428 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-1-1-n-308caa3ab6.183f89deff9f0732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-308caa3ab6,UID:ci-4230-1-1-n-308caa3ab6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-308caa3ab6,},FirstTimestamp:2025-05-14 23:52:51.307398962 +0000 UTC m=+1.360237319,LastTimestamp:2025-05-14 23:52:51.307398962 +0000 UTC m=+1.360237319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-308caa3ab6,}"
May 14 23:52:56.517418 kubelet[2428]: I0514 23:52:56.516948 2428 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.526716 kubelet[2428]: I0514 23:52:56.525462 2428 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.536811 kubelet[2428]: E0514 23:52:56.536686 2428 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-1-1-n-308caa3ab6.183f89df02dfa78e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-308caa3ab6,UID:ci-4230-1-1-n-308caa3ab6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230-1-1-n-308caa3ab6 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-308caa3ab6,},FirstTimestamp:2025-05-14 23:52:51.361965966 +0000 UTC m=+1.414804323,LastTimestamp:2025-05-14 23:52:51.361965966 +0000 UTC m=+1.414804323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-308caa3ab6,}"
May 14 23:52:56.543883 kubelet[2428]: E0514 23:52:56.542814 2428 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.543883 kubelet[2428]: I0514 23:52:56.542858 2428 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.546384 kubelet[2428]: E0514 23:52:56.546344 2428 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.547752 kubelet[2428]: I0514 23:52:56.547720 2428 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:56.550817 kubelet[2428]: E0514 23:52:56.550777 2428 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-308caa3ab6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:57.316426 kubelet[2428]: I0514 23:52:57.316373 2428 apiserver.go:52] "Watching apiserver"
May 14 23:52:57.321883 kubelet[2428]: I0514 23:52:57.321842 2428 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 23:52:58.894119 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit session-7.scope)...
May 14 23:52:58.894140 systemd[1]: Reloading...
May 14 23:52:59.015738 zram_generator::config[2751]: No configuration found.
May 14 23:52:59.116956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:52:59.224050 systemd[1]: Reloading finished in 329 ms.
May 14 23:52:59.253304 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:59.266644 systemd[1]: kubelet.service: Deactivated successfully.
May 14 23:52:59.267016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:59.267088 systemd[1]: kubelet.service: Consumed 1.844s CPU time, 124.8M memory peak.
May 14 23:52:59.275214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:59.426227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:52:59.426402 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:52:59.479213 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:52:59.479213 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 23:52:59.479213 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:52:59.479213 kubelet[2793]: I0514 23:52:59.478896 2793 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 23:52:59.492568 kubelet[2793]: I0514 23:52:59.489880 2793 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 23:52:59.492568 kubelet[2793]: I0514 23:52:59.489916 2793 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 23:52:59.492568 kubelet[2793]: I0514 23:52:59.490231 2793 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 23:52:59.492568 kubelet[2793]: I0514 23:52:59.492030 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 23:52:59.495797 kubelet[2793]: I0514 23:52:59.495474 2793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 23:52:59.500565 kubelet[2793]: E0514 23:52:59.500469 2793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 14 23:52:59.500565 kubelet[2793]: I0514 23:52:59.500524 2793 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 14 23:52:59.504626 kubelet[2793]: I0514 23:52:59.504580 2793 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:52:59.504913 kubelet[2793]: I0514 23:52:59.504872 2793 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 23:52:59.505461 kubelet[2793]: I0514 23:52:59.504906 2793 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-308caa3ab6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 23:52:59.505652 kubelet[2793]: I0514 23:52:59.505474 2793 topology_manager.go:138] "Creating topology manager with none policy"
May 14 23:52:59.505652 kubelet[2793]: I0514 23:52:59.505498 2793 container_manager_linux.go:304] "Creating device plugin manager"
May 14 23:52:59.505652 kubelet[2793]: I0514 23:52:59.505554 2793 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:52:59.506623 kubelet[2793]: I0514 23:52:59.506573 2793 kubelet.go:446] "Attempting to sync node with API server"
May 14 23:52:59.507102 kubelet[2793]: I0514 23:52:59.507077 2793 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 23:52:59.507183 kubelet[2793]: I0514 23:52:59.507112 2793 kubelet.go:352] "Adding apiserver pod source"
May 14 23:52:59.507183 kubelet[2793]: I0514 23:52:59.507124 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 23:52:59.518444 kubelet[2793]: I0514 23:52:59.516895 2793 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 14 23:52:59.519431 kubelet[2793]: I0514 23:52:59.519100 2793 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 23:52:59.523619 kubelet[2793]: I0514 23:52:59.522930 2793 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 23:52:59.524651 kubelet[2793]: I0514 23:52:59.524632 2793 server.go:1287] "Started kubelet"
May 14 23:52:59.534870 kubelet[2793]: I0514 23:52:59.525246 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 23:52:59.535202 kubelet[2793]: I0514 23:52:59.535171 2793 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 23:52:59.535273 kubelet[2793]: I0514 23:52:59.529761 2793 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 23:52:59.536247 kubelet[2793]: I0514 23:52:59.529531 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 23:52:59.543915 kubelet[2793]: I0514 23:52:59.529913 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 23:52:59.544364 kubelet[2793]: I0514 23:52:59.544343 2793 server.go:490] "Adding debug handlers to kubelet server"
May 14 23:52:59.545313 kubelet[2793]: I0514 23:52:59.545288 2793 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 23:52:59.545622 kubelet[2793]: E0514 23:52:59.545588 2793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-308caa3ab6\" not found"
May 14 23:52:59.548387 kubelet[2793]: I0514 23:52:59.548281 2793 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 23:52:59.548856 kubelet[2793]: I0514 23:52:59.548830 2793 reconciler.go:26] "Reconciler: start to sync state"
May 14 23:52:59.551807 kubelet[2793]: I0514 23:52:59.551103 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 23:52:59.554296 kubelet[2793]: I0514 23:52:59.553567 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 23:52:59.554296 kubelet[2793]: I0514 23:52:59.553601 2793 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 23:52:59.554296 kubelet[2793]: I0514 23:52:59.553620 2793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 14 23:52:59.554296 kubelet[2793]: I0514 23:52:59.553627 2793 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 23:52:59.554296 kubelet[2793]: E0514 23:52:59.553669 2793 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 23:52:59.565440 kubelet[2793]: I0514 23:52:59.565141 2793 factory.go:221] Registration of the systemd container factory successfully
May 14 23:52:59.565440 kubelet[2793]: I0514 23:52:59.565248 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 23:52:59.569511 kubelet[2793]: I0514 23:52:59.569338 2793 factory.go:221] Registration of the containerd container factory successfully
May 14 23:52:59.569511 kubelet[2793]: E0514 23:52:59.569374 2793 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 23:52:59.632240 kubelet[2793]: I0514 23:52:59.632202 2793 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 23:52:59.632240 kubelet[2793]: I0514 23:52:59.632225 2793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 23:52:59.632240 kubelet[2793]: I0514 23:52:59.632247 2793 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:52:59.632549 kubelet[2793]: I0514 23:52:59.632419 2793 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 23:52:59.632549 kubelet[2793]: I0514 23:52:59.632430 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 23:52:59.632549 kubelet[2793]: I0514 23:52:59.632450 2793 policy_none.go:49] "None policy: Start"
May 14 23:52:59.632549 kubelet[2793]: I0514 23:52:59.632458 2793 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 23:52:59.632549 kubelet[2793]: I0514 23:52:59.632466 2793 state_mem.go:35] "Initializing new in-memory state store"
May 14 23:52:59.632851 kubelet[2793]: I0514 23:52:59.632605 2793 state_mem.go:75] "Updated machine memory state"
May 14 23:52:59.638058 kubelet[2793]: I0514 23:52:59.637388 2793 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 23:52:59.638058 kubelet[2793]: I0514 23:52:59.637600 2793 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 23:52:59.638058 kubelet[2793]: I0514 23:52:59.637615 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 23:52:59.638058 kubelet[2793]: I0514 23:52:59.637941 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 23:52:59.640680 kubelet[2793]: E0514 23:52:59.640183 2793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 23:52:59.654889 kubelet[2793]: I0514 23:52:59.654863 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.655547 kubelet[2793]: I0514 23:52:59.655494 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.657353 kubelet[2793]: I0514 23:52:59.656609 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.742180 kubelet[2793]: I0514 23:52:59.742043 2793 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.749880 kubelet[2793]: I0514 23:52:59.749685 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.749880 kubelet[2793]: I0514 23:52:59.749765 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.749880 kubelet[2793]: I0514 23:52:59.749803 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.749880 kubelet[2793]: I0514 23:52:59.749834 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.749880 kubelet[2793]: I0514 23:52:59.749867 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1530ec26300539c2bad8dd7e11c7a72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" (UID: \"c1530ec26300539c2bad8dd7e11c7a72\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.750397 kubelet[2793]: I0514 23:52:59.749899 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7af6849a88c275f94326fe8de7f99155-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-308caa3ab6\" (UID: \"7af6849a88c275f94326fe8de7f99155\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.750397 kubelet[2793]: I0514 23:52:59.749927 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.750397 kubelet[2793]: I0514 23:52:59.750049 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.750397 kubelet[2793]: I0514 23:52:59.750086 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9474fde7866eaeef600502affd6952d-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" (UID: \"a9474fde7866eaeef600502affd6952d\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.756965 kubelet[2793]: I0514 23:52:59.756655 2793 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.756965 kubelet[2793]: I0514 23:52:59.756751 2793 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-n-308caa3ab6"
May 14 23:52:59.892186 sudo[2825]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 14 23:52:59.892899 sudo[2825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 14 23:53:00.346830 sudo[2825]: pam_unix(sudo:session): session closed for user root
May 14 23:53:00.509920 kubelet[2793]: I0514 23:53:00.509635 2793 apiserver.go:52] "Watching apiserver"
May 14 23:53:00.548541 kubelet[2793]: I0514 23:53:00.548427 2793 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 23:53:00.605150 kubelet[2793]: I0514 23:53:00.604759 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.607176 kubelet[2793]: I0514 23:53:00.606283 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.607176 kubelet[2793]: I0514 23:53:00.606558 2793 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.618965 kubelet[2793]: E0514 23:53:00.618878 2793 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-308caa3ab6\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.621364 kubelet[2793]: E0514 23:53:00.621339 2793 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-308caa3ab6\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.623542 kubelet[2793]: E0514 23:53:00.623260 2793 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-308caa3ab6\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6"
May 14 23:53:00.653748 kubelet[2793]: I0514 23:53:00.652101 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-n-308caa3ab6" podStartSLOduration=1.652079525 podStartE2EDuration="1.652079525s" podCreationTimestamp="2025-05-14 23:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:00.639782057 +0000 UTC m=+1.208144253" watchObservedRunningTime="2025-05-14 23:53:00.652079525 +0000 UTC m=+1.220441761"
May 14 23:53:00.653748 kubelet[2793]: I0514 23:53:00.652268 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-n-308caa3ab6" podStartSLOduration=1.6522622409999999 podStartE2EDuration="1.652262241s" podCreationTimestamp="2025-05-14 23:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:00.65185425 +0000 UTC m=+1.220216446" watchObservedRunningTime="2025-05-14 23:53:00.652262241 +0000 UTC
m=+1.220624437" May 14 23:53:02.510666 sudo[1882]: pam_unix(sudo:session): session closed for user root May 14 23:53:02.668869 sshd[1881]: Connection closed by 147.75.109.163 port 34092 May 14 23:53:02.669927 sshd-session[1879]: pam_unix(sshd:session): session closed for user core May 14 23:53:02.677388 systemd[1]: sshd@6-91.99.86.151:22-147.75.109.163:34092.service: Deactivated successfully. May 14 23:53:02.681111 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:53:02.681502 systemd[1]: session-7.scope: Consumed 7.761s CPU time, 262.8M memory peak. May 14 23:53:02.683106 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. May 14 23:53:02.684266 systemd-logind[1477]: Removed session 7. May 14 23:53:03.325515 kubelet[2793]: I0514 23:53:03.325299 2793 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:53:03.326094 containerd[1488]: time="2025-05-14T23:53:03.325934571Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 23:53:03.326719 kubelet[2793]: I0514 23:53:03.326629 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:53:04.201300 kubelet[2793]: I0514 23:53:04.201224 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-308caa3ab6" podStartSLOduration=5.201204117 podStartE2EDuration="5.201204117s" podCreationTimestamp="2025-05-14 23:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:00.66830994 +0000 UTC m=+1.236672176" watchObservedRunningTime="2025-05-14 23:53:04.201204117 +0000 UTC m=+4.769566273" May 14 23:53:04.211554 systemd[1]: Created slice kubepods-besteffort-podb896ec4f_b8d2_4966_b8ca_542ca2223f9b.slice - libcontainer container kubepods-besteffort-podb896ec4f_b8d2_4966_b8ca_542ca2223f9b.slice. May 14 23:53:04.219484 kubelet[2793]: I0514 23:53:04.219419 2793 status_manager.go:890] "Failed to get status for pod" podUID="b896ec4f-b8d2-4966-b8ca-542ca2223f9b" pod="kube-system/kube-proxy-5vhhn" err="pods \"kube-proxy-5vhhn\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" May 14 23:53:04.219829 kubelet[2793]: W0514 23:53:04.219785 2793 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object May 14 23:53:04.219891 kubelet[2793]: E0514 23:53:04.219826 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: 
configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError" May 14 23:53:04.220771 kubelet[2793]: W0514 23:53:04.220102 2793 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object May 14 23:53:04.220771 kubelet[2793]: E0514 23:53:04.220133 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError" May 14 23:53:04.250918 systemd[1]: Created slice kubepods-burstable-pod74f1d0e7_f23c_4fdd_89a0_ad7e0db5258c.slice - libcontainer container kubepods-burstable-pod74f1d0e7_f23c_4fdd_89a0_ad7e0db5258c.slice. 
May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286440 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hostproc\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286499 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-xtables-lock\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286525 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-net\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286541 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cni-path\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286556 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b896ec4f-b8d2-4966-b8ca-542ca2223f9b-kube-proxy\") pod \"kube-proxy-5vhhn\" (UID: \"b896ec4f-b8d2-4966-b8ca-542ca2223f9b\") " pod="kube-system/kube-proxy-5vhhn" May 14 23:53:04.287652 kubelet[2793]: I0514 23:53:04.286571 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-lib-modules\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286587 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-clustermesh-secrets\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286604 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-run\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286621 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b896ec4f-b8d2-4966-b8ca-542ca2223f9b-xtables-lock\") pod \"kube-proxy-5vhhn\" (UID: \"b896ec4f-b8d2-4966-b8ca-542ca2223f9b\") " pod="kube-system/kube-proxy-5vhhn" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286637 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b896ec4f-b8d2-4966-b8ca-542ca2223f9b-lib-modules\") pod \"kube-proxy-5vhhn\" (UID: \"b896ec4f-b8d2-4966-b8ca-542ca2223f9b\") " pod="kube-system/kube-proxy-5vhhn" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286657 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqwz\" (UniqueName: 
\"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-kube-api-access-qrqwz\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.287939 kubelet[2793]: I0514 23:53:04.286675 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hubble-tls\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.288065 kubelet[2793]: I0514 23:53:04.286689 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-etc-cni-netd\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.288065 kubelet[2793]: I0514 23:53:04.286728 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kh5q\" (UniqueName: \"kubernetes.io/projected/b896ec4f-b8d2-4966-b8ca-542ca2223f9b-kube-api-access-6kh5q\") pod \"kube-proxy-5vhhn\" (UID: \"b896ec4f-b8d2-4966-b8ca-542ca2223f9b\") " pod="kube-system/kube-proxy-5vhhn" May 14 23:53:04.288065 kubelet[2793]: I0514 23:53:04.286744 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-bpf-maps\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.288065 kubelet[2793]: I0514 23:53:04.286767 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-cgroup\") pod \"cilium-cvdfd\" (UID: 
\"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.288065 kubelet[2793]: I0514 23:53:04.286784 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-kernel\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.288163 kubelet[2793]: I0514 23:53:04.286805 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-config-path\") pod \"cilium-cvdfd\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") " pod="kube-system/cilium-cvdfd" May 14 23:53:04.369054 systemd[1]: Created slice kubepods-besteffort-pode8ef48f4_af73_476d_a0a1_f5ae2a271737.slice - libcontainer container kubepods-besteffort-pode8ef48f4_af73_476d_a0a1_f5ae2a271737.slice. 
May 14 23:53:04.389780 kubelet[2793]: I0514 23:53:04.387674 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8ef48f4-af73-476d-a0a1-f5ae2a271737-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pmbwm\" (UID: \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\") " pod="kube-system/cilium-operator-6c4d7847fc-pmbwm" May 14 23:53:04.389780 kubelet[2793]: I0514 23:53:04.387949 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69c9q\" (UniqueName: \"kubernetes.io/projected/e8ef48f4-af73-476d-a0a1-f5ae2a271737-kube-api-access-69c9q\") pod \"cilium-operator-6c4d7847fc-pmbwm\" (UID: \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\") " pod="kube-system/cilium-operator-6c4d7847fc-pmbwm" May 14 23:53:05.277500 containerd[1488]: time="2025-05-14T23:53:05.277262786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pmbwm,Uid:e8ef48f4-af73-476d-a0a1-f5ae2a271737,Namespace:kube-system,Attempt:0,}" May 14 23:53:05.304434 containerd[1488]: time="2025-05-14T23:53:05.304264893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:05.304694 containerd[1488]: time="2025-05-14T23:53:05.304413892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:05.304694 containerd[1488]: time="2025-05-14T23:53:05.304451931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.305625 containerd[1488]: time="2025-05-14T23:53:05.305539799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.323930 systemd[1]: Started cri-containerd-1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d.scope - libcontainer container 1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d. May 14 23:53:05.356693 containerd[1488]: time="2025-05-14T23:53:05.356428647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pmbwm,Uid:e8ef48f4-af73-476d-a0a1-f5ae2a271737,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\"" May 14 23:53:05.358946 containerd[1488]: time="2025-05-14T23:53:05.358907140Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 23:53:05.418185 containerd[1488]: time="2025-05-14T23:53:05.418123457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vhhn,Uid:b896ec4f-b8d2-4966-b8ca-542ca2223f9b,Namespace:kube-system,Attempt:0,}" May 14 23:53:05.448404 containerd[1488]: time="2025-05-14T23:53:05.448072052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:05.448404 containerd[1488]: time="2025-05-14T23:53:05.448147331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:05.448404 containerd[1488]: time="2025-05-14T23:53:05.448184211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.448404 containerd[1488]: time="2025-05-14T23:53:05.448274850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.457045 containerd[1488]: time="2025-05-14T23:53:05.456999595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvdfd,Uid:74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c,Namespace:kube-system,Attempt:0,}" May 14 23:53:05.475909 systemd[1]: Started cri-containerd-0d984f0e9c04c6fa4192da3141a5fd70c3a200870b3de2980224152287c91ea4.scope - libcontainer container 0d984f0e9c04c6fa4192da3141a5fd70c3a200870b3de2980224152287c91ea4. May 14 23:53:05.494008 containerd[1488]: time="2025-05-14T23:53:05.493858515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:05.494141 containerd[1488]: time="2025-05-14T23:53:05.494030233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:05.494141 containerd[1488]: time="2025-05-14T23:53:05.494046713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.495330 containerd[1488]: time="2025-05-14T23:53:05.494847504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:05.507293 containerd[1488]: time="2025-05-14T23:53:05.507222010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vhhn,Uid:b896ec4f-b8d2-4966-b8ca-542ca2223f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d984f0e9c04c6fa4192da3141a5fd70c3a200870b3de2980224152287c91ea4\"" May 14 23:53:05.514404 containerd[1488]: time="2025-05-14T23:53:05.514101575Z" level=info msg="CreateContainer within sandbox \"0d984f0e9c04c6fa4192da3141a5fd70c3a200870b3de2980224152287c91ea4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:53:05.528021 systemd[1]: Started cri-containerd-82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836.scope - libcontainer container 82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836. May 14 23:53:05.545924 containerd[1488]: time="2025-05-14T23:53:05.545690472Z" level=info msg="CreateContainer within sandbox \"0d984f0e9c04c6fa4192da3141a5fd70c3a200870b3de2980224152287c91ea4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4991aef1917bca9347bf83ae57943a2e99b65dabc8a3c8add8c02372a4f56b86\"" May 14 23:53:05.547024 containerd[1488]: time="2025-05-14T23:53:05.546662462Z" level=info msg="StartContainer for \"4991aef1917bca9347bf83ae57943a2e99b65dabc8a3c8add8c02372a4f56b86\"" May 14 23:53:05.558869 containerd[1488]: time="2025-05-14T23:53:05.558821050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvdfd,Uid:74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\"" May 14 23:53:05.584999 systemd[1]: Started cri-containerd-4991aef1917bca9347bf83ae57943a2e99b65dabc8a3c8add8c02372a4f56b86.scope - libcontainer container 4991aef1917bca9347bf83ae57943a2e99b65dabc8a3c8add8c02372a4f56b86. 
May 14 23:53:05.625678 containerd[1488]: time="2025-05-14T23:53:05.625316968Z" level=info msg="StartContainer for \"4991aef1917bca9347bf83ae57943a2e99b65dabc8a3c8add8c02372a4f56b86\" returns successfully" May 14 23:53:05.649847 kubelet[2793]: I0514 23:53:05.649759 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5vhhn" podStartSLOduration=1.649738623 podStartE2EDuration="1.649738623s" podCreationTimestamp="2025-05-14 23:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:05.649619304 +0000 UTC m=+6.217981500" watchObservedRunningTime="2025-05-14 23:53:05.649738623 +0000 UTC m=+6.218100819" May 14 23:53:06.921608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209260240.mount: Deactivated successfully. May 14 23:53:08.626841 containerd[1488]: time="2025-05-14T23:53:08.625935053Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:08.626841 containerd[1488]: time="2025-05-14T23:53:08.626784009Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 23:53:08.627620 containerd[1488]: time="2025-05-14T23:53:08.627552966Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:08.629691 containerd[1488]: time="2025-05-14T23:53:08.628955200Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.269983301s" May 14 23:53:08.629691 containerd[1488]: time="2025-05-14T23:53:08.628993680Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 23:53:08.630801 containerd[1488]: time="2025-05-14T23:53:08.630771953Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 23:53:08.632873 containerd[1488]: time="2025-05-14T23:53:08.632843185Z" level=info msg="CreateContainer within sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 23:53:08.657694 containerd[1488]: time="2025-05-14T23:53:08.657653684Z" level=info msg="CreateContainer within sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\"" May 14 23:53:08.658885 containerd[1488]: time="2025-05-14T23:53:08.658835879Z" level=info msg="StartContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\"" May 14 23:53:08.687551 systemd[1]: run-containerd-runc-k8s.io-9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2-runc.PKYMBn.mount: Deactivated successfully. May 14 23:53:08.695885 systemd[1]: Started cri-containerd-9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2.scope - libcontainer container 9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2. 
May 14 23:53:08.724523 containerd[1488]: time="2025-05-14T23:53:08.724461452Z" level=info msg="StartContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" returns successfully" May 14 23:53:09.673162 kubelet[2793]: I0514 23:53:09.672924 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pmbwm" podStartSLOduration=2.400819059 podStartE2EDuration="5.672905748s" podCreationTimestamp="2025-05-14 23:53:04 +0000 UTC" firstStartedPulling="2025-05-14 23:53:05.358437145 +0000 UTC m=+5.926799341" lastFinishedPulling="2025-05-14 23:53:08.630523834 +0000 UTC m=+9.198886030" observedRunningTime="2025-05-14 23:53:09.671471151 +0000 UTC m=+10.239833347" watchObservedRunningTime="2025-05-14 23:53:09.672905748 +0000 UTC m=+10.241267944" May 14 23:53:14.073025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561460598.mount: Deactivated successfully. May 14 23:53:15.588541 containerd[1488]: time="2025-05-14T23:53:15.586903444Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:15.588541 containerd[1488]: time="2025-05-14T23:53:15.588465699Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 23:53:15.589263 containerd[1488]: time="2025-05-14T23:53:15.589219986Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:15.590602 containerd[1488]: time="2025-05-14T23:53:15.590546958Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.959014128s" May 14 23:53:15.590602 containerd[1488]: time="2025-05-14T23:53:15.590593719Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 23:53:15.596351 containerd[1488]: time="2025-05-14T23:53:15.596312693Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:53:15.614849 containerd[1488]: time="2025-05-14T23:53:15.614804108Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\"" May 14 23:53:15.616534 containerd[1488]: time="2025-05-14T23:53:15.616489964Z" level=info msg="StartContainer for \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\"" May 14 23:53:15.648920 systemd[1]: Started cri-containerd-27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117.scope - libcontainer container 27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117. May 14 23:53:15.688524 containerd[1488]: time="2025-05-14T23:53:15.688451727Z" level=info msg="StartContainer for \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\" returns successfully" May 14 23:53:15.703938 systemd[1]: cri-containerd-27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117.scope: Deactivated successfully. 
May 14 23:53:15.866133 containerd[1488]: time="2025-05-14T23:53:15.865917171Z" level=info msg="shim disconnected" id=27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117 namespace=k8s.io May 14 23:53:15.866133 containerd[1488]: time="2025-05-14T23:53:15.866002051Z" level=warning msg="cleaning up after shim disconnected" id=27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117 namespace=k8s.io May 14 23:53:15.866133 containerd[1488]: time="2025-05-14T23:53:15.866016452Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:16.608669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117-rootfs.mount: Deactivated successfully. May 14 23:53:16.687487 containerd[1488]: time="2025-05-14T23:53:16.687106367Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:53:16.710736 containerd[1488]: time="2025-05-14T23:53:16.709021453Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\"" May 14 23:53:16.710736 containerd[1488]: time="2025-05-14T23:53:16.709982223Z" level=info msg="StartContainer for \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\"" May 14 23:53:16.763353 systemd[1]: Started cri-containerd-0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473.scope - libcontainer container 0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473. 
May 14 23:53:16.817618 containerd[1488]: time="2025-05-14T23:53:16.817574227Z" level=info msg="StartContainer for \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\" returns successfully" May 14 23:53:16.832050 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:53:16.832921 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:53:16.833401 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 23:53:16.839063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:53:16.839262 systemd[1]: cri-containerd-0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473.scope: Deactivated successfully. May 14 23:53:16.862506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:53:16.874774 containerd[1488]: time="2025-05-14T23:53:16.874538504Z" level=info msg="shim disconnected" id=0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473 namespace=k8s.io May 14 23:53:16.874774 containerd[1488]: time="2025-05-14T23:53:16.874604705Z" level=warning msg="cleaning up after shim disconnected" id=0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473 namespace=k8s.io May 14 23:53:16.874774 containerd[1488]: time="2025-05-14T23:53:16.874612785Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:17.608751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473-rootfs.mount: Deactivated successfully. May 14 23:53:17.694067 containerd[1488]: time="2025-05-14T23:53:17.693540114Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:53:17.715542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083341624.mount: Deactivated successfully. 
May 14 23:53:17.722690 containerd[1488]: time="2025-05-14T23:53:17.721780276Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\"" May 14 23:53:17.725741 containerd[1488]: time="2025-05-14T23:53:17.725628126Z" level=info msg="StartContainer for \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\"" May 14 23:53:17.766934 systemd[1]: Started cri-containerd-91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80.scope - libcontainer container 91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80. May 14 23:53:17.799831 containerd[1488]: time="2025-05-14T23:53:17.799780238Z" level=info msg="StartContainer for \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\" returns successfully" May 14 23:53:17.803561 systemd[1]: cri-containerd-91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80.scope: Deactivated successfully. 
May 14 23:53:17.834651 containerd[1488]: time="2025-05-14T23:53:17.834548404Z" level=info msg="shim disconnected" id=91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80 namespace=k8s.io May 14 23:53:17.834889 containerd[1488]: time="2025-05-14T23:53:17.834651685Z" level=warning msg="cleaning up after shim disconnected" id=91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80 namespace=k8s.io May 14 23:53:17.834889 containerd[1488]: time="2025-05-14T23:53:17.834686206Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:17.850190 containerd[1488]: time="2025-05-14T23:53:17.850139204Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:53:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:53:18.609604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80-rootfs.mount: Deactivated successfully. 
May 14 23:53:18.703876 containerd[1488]: time="2025-05-14T23:53:18.703370308Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 23:53:18.725768 containerd[1488]: time="2025-05-14T23:53:18.725726511Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\"" May 14 23:53:18.727043 containerd[1488]: time="2025-05-14T23:53:18.726993049Z" level=info msg="StartContainer for \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\"" May 14 23:53:18.761130 systemd[1]: Started cri-containerd-c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc.scope - libcontainer container c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc. May 14 23:53:18.792172 systemd[1]: cri-containerd-c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc.scope: Deactivated successfully. 
May 14 23:53:18.800288 containerd[1488]: time="2025-05-14T23:53:18.796687335Z" level=info msg="StartContainer for \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\" returns successfully" May 14 23:53:18.827204 containerd[1488]: time="2025-05-14T23:53:18.827125374Z" level=info msg="shim disconnected" id=c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc namespace=k8s.io May 14 23:53:18.827204 containerd[1488]: time="2025-05-14T23:53:18.827196175Z" level=warning msg="cleaning up after shim disconnected" id=c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc namespace=k8s.io May 14 23:53:18.827204 containerd[1488]: time="2025-05-14T23:53:18.827204855Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:53:19.609658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc-rootfs.mount: Deactivated successfully. May 14 23:53:19.711468 containerd[1488]: time="2025-05-14T23:53:19.711393474Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 23:53:19.741193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832205447.mount: Deactivated successfully. 
May 14 23:53:19.743207 containerd[1488]: time="2025-05-14T23:53:19.743021620Z" level=info msg="CreateContainer within sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\"" May 14 23:53:19.744286 containerd[1488]: time="2025-05-14T23:53:19.744126877Z" level=info msg="StartContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\"" May 14 23:53:19.776981 systemd[1]: Started cri-containerd-124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1.scope - libcontainer container 124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1. May 14 23:53:19.818737 containerd[1488]: time="2025-05-14T23:53:19.818441745Z" level=info msg="StartContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" returns successfully" May 14 23:53:19.959816 kubelet[2793]: I0514 23:53:19.959754 2793 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 23:53:20.002620 systemd[1]: Created slice kubepods-burstable-podf8dd6bfc_ab15_4fc3_a984_66c756639a0d.slice - libcontainer container kubepods-burstable-podf8dd6bfc_ab15_4fc3_a984_66c756639a0d.slice. May 14 23:53:20.017794 systemd[1]: Created slice kubepods-burstable-pod13d4ad9c_c59a_4af4_ad16_e394d3911eeb.slice - libcontainer container kubepods-burstable-pod13d4ad9c_c59a_4af4_ad16_e394d3911eeb.slice. 
May 14 23:53:20.105407 kubelet[2793]: I0514 23:53:20.105343 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13d4ad9c-c59a-4af4-ad16-e394d3911eeb-config-volume\") pod \"coredns-668d6bf9bc-sd2vn\" (UID: \"13d4ad9c-c59a-4af4-ad16-e394d3911eeb\") " pod="kube-system/coredns-668d6bf9bc-sd2vn" May 14 23:53:20.105567 kubelet[2793]: I0514 23:53:20.105428 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8dd6bfc-ab15-4fc3-a984-66c756639a0d-config-volume\") pod \"coredns-668d6bf9bc-gqf66\" (UID: \"f8dd6bfc-ab15-4fc3-a984-66c756639a0d\") " pod="kube-system/coredns-668d6bf9bc-gqf66" May 14 23:53:20.105567 kubelet[2793]: I0514 23:53:20.105469 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nhzg\" (UniqueName: \"kubernetes.io/projected/f8dd6bfc-ab15-4fc3-a984-66c756639a0d-kube-api-access-9nhzg\") pod \"coredns-668d6bf9bc-gqf66\" (UID: \"f8dd6bfc-ab15-4fc3-a984-66c756639a0d\") " pod="kube-system/coredns-668d6bf9bc-gqf66" May 14 23:53:20.105567 kubelet[2793]: I0514 23:53:20.105515 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb6pv\" (UniqueName: \"kubernetes.io/projected/13d4ad9c-c59a-4af4-ad16-e394d3911eeb-kube-api-access-hb6pv\") pod \"coredns-668d6bf9bc-sd2vn\" (UID: \"13d4ad9c-c59a-4af4-ad16-e394d3911eeb\") " pod="kube-system/coredns-668d6bf9bc-sd2vn" May 14 23:53:20.314850 containerd[1488]: time="2025-05-14T23:53:20.313901131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqf66,Uid:f8dd6bfc-ab15-4fc3-a984-66c756639a0d,Namespace:kube-system,Attempt:0,}"
May 14 23:53:20.324009 containerd[1488]: time="2025-05-14T23:53:20.323961707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sd2vn,Uid:13d4ad9c-c59a-4af4-ad16-e394d3911eeb,Namespace:kube-system,Attempt:0,}" May 14 23:53:22.050948 systemd-networkd[1390]: cilium_host: Link UP May 14 23:53:22.053095 systemd-networkd[1390]: cilium_net: Link UP May 14 23:53:22.055948 systemd-networkd[1390]: cilium_net: Gained carrier May 14 23:53:22.056174 systemd-networkd[1390]: cilium_host: Gained carrier May 14 23:53:22.184134 systemd-networkd[1390]: cilium_vxlan: Link UP May 14 23:53:22.184146 systemd-networkd[1390]: cilium_vxlan: Gained carrier May 14 23:53:22.482731 kernel: NET: Registered PF_ALG protocol family May 14 23:53:22.498207 systemd-networkd[1390]: cilium_net: Gained IPv6LL May 14 23:53:23.002283 systemd-networkd[1390]: cilium_host: Gained IPv6LL May 14 23:53:23.226891 systemd-networkd[1390]: lxc_health: Link UP May 14 23:53:23.245887 systemd-networkd[1390]: lxc_health: Gained carrier May 14 23:53:23.321851 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL May 14 23:53:23.390120 systemd-networkd[1390]: lxc987428f9b55e: Link UP May 14 23:53:23.395768 kernel: eth0: renamed from tmpf845f May 14 23:53:23.403207 systemd-networkd[1390]: lxc679aabc734fd: Link UP May 14 23:53:23.418983 systemd-networkd[1390]: lxc987428f9b55e: Gained carrier May 14 23:53:23.420872 kernel: eth0: renamed from tmp356e6 May 14 23:53:23.429664 systemd-networkd[1390]: lxc679aabc734fd: Gained carrier
May 14 23:53:23.489293 kubelet[2793]: I0514 23:53:23.489223 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cvdfd" podStartSLOduration=9.457408443 podStartE2EDuration="19.489202755s" podCreationTimestamp="2025-05-14 23:53:04 +0000 UTC" firstStartedPulling="2025-05-14 23:53:05.560936507 +0000 UTC m=+6.129298703" lastFinishedPulling="2025-05-14 23:53:15.592730779 +0000 UTC m=+16.161093015" observedRunningTime="2025-05-14 23:53:20.734784847 +0000 UTC m=+21.303147043" watchObservedRunningTime="2025-05-14 23:53:23.489202755 +0000 UTC m=+24.057564951"
May 14 23:53:24.602078 systemd-networkd[1390]: lxc987428f9b55e: Gained IPv6LL May 14 23:53:24.858013 systemd-networkd[1390]: lxc_health: Gained IPv6LL May 14 23:53:25.115250 systemd-networkd[1390]: lxc679aabc734fd: Gained IPv6LL May 14 23:53:27.422778 containerd[1488]: time="2025-05-14T23:53:27.421626070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:27.422778 containerd[1488]: time="2025-05-14T23:53:27.421799554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:27.422778 containerd[1488]: time="2025-05-14T23:53:27.421817275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:27.422778 containerd[1488]: time="2025-05-14T23:53:27.422236726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:27.431240 containerd[1488]: time="2025-05-14T23:53:27.429626444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:53:27.431240 containerd[1488]: time="2025-05-14T23:53:27.429683525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:53:27.431240 containerd[1488]: time="2025-05-14T23:53:27.429715366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:53:27.431240 containerd[1488]: time="2025-05-14T23:53:27.429827849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:53:27.473922 systemd[1]: Started cri-containerd-356e615380b67308f59a457d440ad265aa78788f2769dd458df4d437f5d39464.scope - libcontainer container 356e615380b67308f59a457d440ad265aa78788f2769dd458df4d437f5d39464. May 14 23:53:27.477216 systemd[1]: Started cri-containerd-f845f8a96056fa206d1cfd7372ac26c492bb3ef7be4a00d265ab075e553558c6.scope - libcontainer container f845f8a96056fa206d1cfd7372ac26c492bb3ef7be4a00d265ab075e553558c6. May 14 23:53:27.548571 containerd[1488]: time="2025-05-14T23:53:27.548204254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sd2vn,Uid:13d4ad9c-c59a-4af4-ad16-e394d3911eeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"356e615380b67308f59a457d440ad265aa78788f2769dd458df4d437f5d39464\"" May 14 23:53:27.555848 containerd[1488]: time="2025-05-14T23:53:27.555416646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqf66,Uid:f8dd6bfc-ab15-4fc3-a984-66c756639a0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f845f8a96056fa206d1cfd7372ac26c492bb3ef7be4a00d265ab075e553558c6\"" May 14 23:53:27.558346 containerd[1488]: time="2025-05-14T23:53:27.558289883Z" level=info msg="CreateContainer within sandbox \"356e615380b67308f59a457d440ad265aa78788f2769dd458df4d437f5d39464\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:53:27.564671 containerd[1488]: time="2025-05-14T23:53:27.564454848Z" level=info msg="CreateContainer within sandbox \"f845f8a96056fa206d1cfd7372ac26c492bb3ef7be4a00d265ab075e553558c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:53:27.587966 containerd[1488]: time="2025-05-14T23:53:27.587923275Z" level=info msg="CreateContainer within sandbox \"356e615380b67308f59a457d440ad265aa78788f2769dd458df4d437f5d39464\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e37bc0fd0f4992a3c45a84b2d0fec9672b97081de42d5a82e366f5c25d3d152a\"" May 14 23:53:27.590782 containerd[1488]: time="2025-05-14T23:53:27.590246058Z" level=info msg="StartContainer for \"e37bc0fd0f4992a3c45a84b2d0fec9672b97081de42d5a82e366f5c25d3d152a\"" May 14 23:53:27.594108 containerd[1488]: time="2025-05-14T23:53:27.593908155Z" level=info msg="CreateContainer within sandbox \"f845f8a96056fa206d1cfd7372ac26c492bb3ef7be4a00d265ab075e553558c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39f3fdbc029a3f0dcd814c3f56948baaef7efcd74c799b32511e0857b6937ad2\"" May 14 23:53:27.597807 containerd[1488]: time="2025-05-14T23:53:27.594834100Z" level=info msg="StartContainer for \"39f3fdbc029a3f0dcd814c3f56948baaef7efcd74c799b32511e0857b6937ad2\"" May 14 23:53:27.635277 systemd[1]: Started cri-containerd-e37bc0fd0f4992a3c45a84b2d0fec9672b97081de42d5a82e366f5c25d3d152a.scope - libcontainer container e37bc0fd0f4992a3c45a84b2d0fec9672b97081de42d5a82e366f5c25d3d152a. May 14 23:53:27.643999 systemd[1]: Started cri-containerd-39f3fdbc029a3f0dcd814c3f56948baaef7efcd74c799b32511e0857b6937ad2.scope - libcontainer container 39f3fdbc029a3f0dcd814c3f56948baaef7efcd74c799b32511e0857b6937ad2.
May 14 23:53:27.689188 containerd[1488]: time="2025-05-14T23:53:27.689006458Z" level=info msg="StartContainer for \"39f3fdbc029a3f0dcd814c3f56948baaef7efcd74c799b32511e0857b6937ad2\" returns successfully" May 14 23:53:27.693826 containerd[1488]: time="2025-05-14T23:53:27.693672902Z" level=info msg="StartContainer for \"e37bc0fd0f4992a3c45a84b2d0fec9672b97081de42d5a82e366f5c25d3d152a\" returns successfully" May 14 23:53:27.802693 kubelet[2793]: I0514 23:53:27.802451 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gqf66" podStartSLOduration=23.802408169 podStartE2EDuration="23.802408169s" podCreationTimestamp="2025-05-14 23:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:27.801183136 +0000 UTC m=+28.369545332" watchObservedRunningTime="2025-05-14 23:53:27.802408169 +0000 UTC m=+28.370770365" May 14 23:53:27.804087 kubelet[2793]: I0514 23:53:27.803294 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sd2vn" podStartSLOduration=23.803276312 podStartE2EDuration="23.803276312s" podCreationTimestamp="2025-05-14 23:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:27.772448808 +0000 UTC m=+28.340811004" watchObservedRunningTime="2025-05-14 23:53:27.803276312 +0000 UTC m=+28.371638508" May 14 23:57:38.639144 systemd[1]: Started sshd@7-91.99.86.151:22-147.75.109.163:44220.service - OpenSSH per-connection server daemon (147.75.109.163:44220). 
May 14 23:57:39.622179 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 44220 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:39.624618 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:39.631087 systemd-logind[1477]: New session 8 of user core. May 14 23:57:39.636893 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 23:57:40.396858 sshd[4217]: Connection closed by 147.75.109.163 port 44220 May 14 23:57:40.396401 sshd-session[4215]: pam_unix(sshd:session): session closed for user core May 14 23:57:40.400853 systemd[1]: sshd@7-91.99.86.151:22-147.75.109.163:44220.service: Deactivated successfully. May 14 23:57:40.404152 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:57:40.405273 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. May 14 23:57:40.406411 systemd-logind[1477]: Removed session 8. May 14 23:57:45.574124 systemd[1]: Started sshd@8-91.99.86.151:22-147.75.109.163:44236.service - OpenSSH per-connection server daemon (147.75.109.163:44236). May 14 23:57:46.560688 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 44236 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:46.563394 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:46.569563 systemd-logind[1477]: New session 9 of user core. May 14 23:57:46.578042 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:57:47.321513 sshd[4232]: Connection closed by 147.75.109.163 port 44236 May 14 23:57:47.320330 sshd-session[4230]: pam_unix(sshd:session): session closed for user core May 14 23:57:47.325506 systemd[1]: sshd@8-91.99.86.151:22-147.75.109.163:44236.service: Deactivated successfully. May 14 23:57:47.329012 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:57:47.330836 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. May 14 23:57:47.331941 systemd-logind[1477]: Removed session 9.
May 14 23:57:52.502252 systemd[1]: Started sshd@9-91.99.86.151:22-147.75.109.163:41272.service - OpenSSH per-connection server daemon (147.75.109.163:41272). May 14 23:57:53.501133 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 41272 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:53.502967 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:53.508689 systemd-logind[1477]: New session 10 of user core. May 14 23:57:53.519992 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:57:54.280137 sshd[4246]: Connection closed by 147.75.109.163 port 41272 May 14 23:57:54.281274 sshd-session[4244]: pam_unix(sshd:session): session closed for user core May 14 23:57:54.286832 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. May 14 23:57:54.288905 systemd[1]: sshd@9-91.99.86.151:22-147.75.109.163:41272.service: Deactivated successfully. May 14 23:57:54.293244 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:57:54.295506 systemd-logind[1477]: Removed session 10. May 14 23:57:54.462075 systemd[1]: Started sshd@10-91.99.86.151:22-147.75.109.163:41280.service - OpenSSH per-connection server daemon (147.75.109.163:41280). May 14 23:57:55.453484 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 41280 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:55.455648 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:55.462244 systemd-logind[1477]: New session 11 of user core. May 14 23:57:55.469943 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:57:56.258373 sshd[4261]: Connection closed by 147.75.109.163 port 41280 May 14 23:57:56.259095 sshd-session[4259]: pam_unix(sshd:session): session closed for user core May 14 23:57:56.267531 systemd[1]: sshd@10-91.99.86.151:22-147.75.109.163:41280.service: Deactivated successfully. May 14 23:57:56.271607 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:57:56.273545 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. May 14 23:57:56.275072 systemd-logind[1477]: Removed session 11. May 14 23:57:56.437068 systemd[1]: Started sshd@11-91.99.86.151:22-147.75.109.163:41296.service - OpenSSH per-connection server daemon (147.75.109.163:41296). May 14 23:57:57.455817 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 41296 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:57.457048 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:57.464121 systemd-logind[1477]: New session 12 of user core. May 14 23:57:57.472055 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:57:58.231807 sshd[4273]: Connection closed by 147.75.109.163 port 41296 May 14 23:57:58.232667 sshd-session[4271]: pam_unix(sshd:session): session closed for user core May 14 23:57:58.237944 systemd[1]: sshd@11-91.99.86.151:22-147.75.109.163:41296.service: Deactivated successfully. May 14 23:57:58.240846 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:57:58.242286 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. May 14 23:57:58.243778 systemd-logind[1477]: Removed session 12. 
May 14 23:58:00.852744 update_engine[1480]: I20250514 23:58:00.852578 1480 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 23:58:00.852744 update_engine[1480]: I20250514 23:58:00.852667 1480 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 23:58:00.853331 update_engine[1480]: I20250514 23:58:00.853207 1480 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 23:58:00.854464 update_engine[1480]: I20250514 23:58:00.854207 1480 omaha_request_params.cc:62] Current group set to beta May 14 23:58:00.854464 update_engine[1480]: I20250514 23:58:00.854347 1480 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 23:58:00.854464 update_engine[1480]: I20250514 23:58:00.854362 1480 update_attempter.cc:643] Scheduling an action processor start. May 14 23:58:00.854464 update_engine[1480]: I20250514 23:58:00.854406 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 23:58:00.854464 update_engine[1480]: I20250514 23:58:00.854466 1480 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 23:58:00.854695 update_engine[1480]: I20250514 23:58:00.854568 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 23:58:00.854695 update_engine[1480]: I20250514 23:58:00.854580 1480 omaha_request_action.cc:272] Request: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: May 14 23:58:00.854695 update_engine[1480]: I20250514 23:58:00.854590 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:58:00.855626 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 23:58:00.857167 update_engine[1480]: I20250514 23:58:00.857120 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:58:00.857626 update_engine[1480]: I20250514 23:58:00.857571 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 23:58:00.860464 update_engine[1480]: E20250514 23:58:00.860365 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:58:00.860627 update_engine[1480]: I20250514 23:58:00.860478 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 23:58:03.409193 systemd[1]: Started sshd@12-91.99.86.151:22-147.75.109.163:44114.service - OpenSSH per-connection server daemon (147.75.109.163:44114). May 14 23:58:04.392515 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 44114 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:04.394767 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:04.402402 systemd-logind[1477]: New session 13 of user core. May 14 23:58:04.408939 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 23:58:05.143543 sshd[4289]: Connection closed by 147.75.109.163 port 44114 May 14 23:58:05.144344 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 14 23:58:05.149041 systemd[1]: sshd@12-91.99.86.151:22-147.75.109.163:44114.service: Deactivated successfully. May 14 23:58:05.152434 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:58:05.154784 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. May 14 23:58:05.156385 systemd-logind[1477]: Removed session 13. May 14 23:58:05.324107 systemd[1]: Started sshd@13-91.99.86.151:22-147.75.109.163:44122.service - OpenSSH per-connection server daemon (147.75.109.163:44122).
May 14 23:58:06.306102 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 44122 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:06.308604 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:06.315548 systemd-logind[1477]: New session 14 of user core. May 14 23:58:06.323987 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:58:07.102807 sshd[4305]: Connection closed by 147.75.109.163 port 44122 May 14 23:58:07.103485 sshd-session[4300]: pam_unix(sshd:session): session closed for user core May 14 23:58:07.108934 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. May 14 23:58:07.109504 systemd[1]: sshd@13-91.99.86.151:22-147.75.109.163:44122.service: Deactivated successfully. May 14 23:58:07.113409 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:58:07.117152 systemd-logind[1477]: Removed session 14. May 14 23:58:07.284255 systemd[1]: Started sshd@14-91.99.86.151:22-147.75.109.163:44132.service - OpenSSH per-connection server daemon (147.75.109.163:44132). May 14 23:58:08.274341 sshd[4314]: Accepted publickey for core from 147.75.109.163 port 44132 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:08.276747 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:08.284053 systemd-logind[1477]: New session 15 of user core. May 14 23:58:08.291953 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:58:09.964153 sshd[4316]: Connection closed by 147.75.109.163 port 44132 May 14 23:58:09.965037 sshd-session[4314]: pam_unix(sshd:session): session closed for user core May 14 23:58:09.970976 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. May 14 23:58:09.973342 systemd[1]: sshd@14-91.99.86.151:22-147.75.109.163:44132.service: Deactivated successfully. 
May 14 23:58:09.977270 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:58:09.979829 systemd-logind[1477]: Removed session 15. May 14 23:58:10.149466 systemd[1]: Started sshd@15-91.99.86.151:22-147.75.109.163:60804.service - OpenSSH per-connection server daemon (147.75.109.163:60804). May 14 23:58:10.850294 update_engine[1480]: I20250514 23:58:10.850131 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:58:10.850858 update_engine[1480]: I20250514 23:58:10.850561 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:58:10.851156 update_engine[1480]: I20250514 23:58:10.851080 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 23:58:10.851764 update_engine[1480]: E20250514 23:58:10.851672 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:58:10.851879 update_engine[1480]: I20250514 23:58:10.851774 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 14 23:58:11.163599 sshd[4333]: Accepted publickey for core from 147.75.109.163 port 60804 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:11.165637 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:11.173713 systemd-logind[1477]: New session 16 of user core. May 14 23:58:11.180950 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:58:12.061744 sshd[4335]: Connection closed by 147.75.109.163 port 60804 May 14 23:58:12.060872 sshd-session[4333]: pam_unix(sshd:session): session closed for user core May 14 23:58:12.066746 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. May 14 23:58:12.067582 systemd[1]: sshd@15-91.99.86.151:22-147.75.109.163:60804.service: Deactivated successfully. May 14 23:58:12.071643 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:58:12.073198 systemd-logind[1477]: Removed session 16. 
May 14 23:58:12.247856 systemd[1]: Started sshd@16-91.99.86.151:22-147.75.109.163:60812.service - OpenSSH per-connection server daemon (147.75.109.163:60812). May 14 23:58:13.256746 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 60812 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:13.259493 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:13.266950 systemd-logind[1477]: New session 17 of user core. May 14 23:58:13.280094 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:58:14.030843 sshd[4347]: Connection closed by 147.75.109.163 port 60812 May 14 23:58:14.030739 sshd-session[4345]: pam_unix(sshd:session): session closed for user core May 14 23:58:14.036191 systemd[1]: sshd@16-91.99.86.151:22-147.75.109.163:60812.service: Deactivated successfully. May 14 23:58:14.038406 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:58:14.039828 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. May 14 23:58:14.041029 systemd-logind[1477]: Removed session 17. May 14 23:58:19.213449 systemd[1]: Started sshd@17-91.99.86.151:22-147.75.109.163:46050.service - OpenSSH per-connection server daemon (147.75.109.163:46050). May 14 23:58:20.219774 sshd[4361]: Accepted publickey for core from 147.75.109.163 port 46050 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:20.221454 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:20.228196 systemd-logind[1477]: New session 18 of user core. May 14 23:58:20.237013 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 23:58:20.850763 update_engine[1480]: I20250514 23:58:20.850593 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 23:58:20.851286 update_engine[1480]: I20250514 23:58:20.850981 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 23:58:20.851339 update_engine[1480]: I20250514 23:58:20.851300 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 23:58:20.851948 update_engine[1480]: E20250514 23:58:20.851882 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 23:58:20.852056 update_engine[1480]: I20250514 23:58:20.852013 1480 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 14 23:58:20.991388 sshd[4363]: Connection closed by 147.75.109.163 port 46050 May 14 23:58:20.992630 sshd-session[4361]: pam_unix(sshd:session): session closed for user core May 14 23:58:20.997574 systemd[1]: sshd@17-91.99.86.151:22-147.75.109.163:46050.service: Deactivated successfully. May 14 23:58:21.000351 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:58:21.002172 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. May 14 23:58:21.003937 systemd-logind[1477]: Removed session 18. May 14 23:58:26.163512 systemd[1]: Started sshd@18-91.99.86.151:22-147.75.109.163:46054.service - OpenSSH per-connection server daemon (147.75.109.163:46054). May 14 23:58:27.158664 sshd[4375]: Accepted publickey for core from 147.75.109.163 port 46054 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:58:27.160881 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:58:27.166477 systemd-logind[1477]: New session 19 of user core. May 14 23:58:27.172922 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 14 23:58:27.916476 sshd[4377]: Connection closed by 147.75.109.163 port 46054
May 14 23:58:27.917573 sshd-session[4375]: pam_unix(sshd:session): session closed for user core
May 14 23:58:27.922818 systemd[1]: sshd@18-91.99.86.151:22-147.75.109.163:46054.service: Deactivated successfully.
May 14 23:58:27.925977 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:58:27.927408 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit.
May 14 23:58:27.929365 systemd-logind[1477]: Removed session 19.
May 14 23:58:28.102043 systemd[1]: Started sshd@19-91.99.86.151:22-147.75.109.163:46058.service - OpenSSH per-connection server daemon (147.75.109.163:46058).
May 14 23:58:29.109257 sshd[4389]: Accepted publickey for core from 147.75.109.163 port 46058 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:58:29.111216 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:58:29.117441 systemd-logind[1477]: New session 20 of user core.
May 14 23:58:29.124041 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:58:30.851779 update_engine[1480]: I20250514 23:58:30.850733 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:58:30.851779 update_engine[1480]: I20250514 23:58:30.850961 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:58:30.851779 update_engine[1480]: I20250514 23:58:30.851220 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:58:30.854338 update_engine[1480]: E20250514 23:58:30.852813 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.852942 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.852962 1480 omaha_request_action.cc:617] Omaha request response:
May 14 23:58:30.854338 update_engine[1480]: E20250514 23:58:30.853089 1480 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853152 1480 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853175 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853188 1480 update_attempter.cc:306] Processing Done.
May 14 23:58:30.854338 update_engine[1480]: E20250514 23:58:30.853214 1480 update_attempter.cc:619] Update failed.
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853226 1480 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853238 1480 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853251 1480 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853384 1480 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853435 1480 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 23:58:30.854338 update_engine[1480]: I20250514 23:58:30.853451 1480 omaha_request_action.cc:272] Request:
May 14 23:58:30.854338 update_engine[1480]: 
May 14 23:58:30.854338 update_engine[1480]: 
May 14 23:58:30.855328 update_engine[1480]: 
May 14 23:58:30.855328 update_engine[1480]: 
May 14 23:58:30.855328 update_engine[1480]: 
May 14 23:58:30.855328 update_engine[1480]: 
May 14 23:58:30.855328 update_engine[1480]: I20250514 23:58:30.853463 1480 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:58:30.855328 update_engine[1480]: I20250514 23:58:30.853829 1480 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:58:30.855328 update_engine[1480]: I20250514 23:58:30.854208 1480 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:58:30.856029 update_engine[1480]: E20250514 23:58:30.855971 1480 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:58:30.856829 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856492 1480 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856526 1480 omaha_request_action.cc:617] Omaha request response:
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856541 1480 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856554 1480 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856564 1480 update_attempter.cc:306] Processing Done.
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856580 1480 update_attempter.cc:310] Error event sent.
May 14 23:58:30.857500 update_engine[1480]: I20250514 23:58:30.856600 1480 update_check_scheduler.cc:74] Next update check in 42m25s
May 14 23:58:30.858326 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 23:58:31.825141 containerd[1488]: time="2025-05-14T23:58:31.825079247Z" level=info msg="StopContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" with timeout 30 (s)"
May 14 23:58:31.827775 containerd[1488]: time="2025-05-14T23:58:31.827584761Z" level=info msg="Stop container \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" with signal terminated"
May 14 23:58:31.841645 containerd[1488]: time="2025-05-14T23:58:31.841554796Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:58:31.847282 systemd[1]: cri-containerd-9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2.scope: Deactivated successfully.
May 14 23:58:31.856007 containerd[1488]: time="2025-05-14T23:58:31.855678638Z" level=info msg="StopContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" with timeout 2 (s)"
May 14 23:58:31.856314 containerd[1488]: time="2025-05-14T23:58:31.856135259Z" level=info msg="Stop container \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" with signal terminated"
May 14 23:58:31.866618 systemd-networkd[1390]: lxc_health: Link DOWN
May 14 23:58:31.866626 systemd-networkd[1390]: lxc_health: Lost carrier
May 14 23:58:31.882616 systemd[1]: cri-containerd-124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1.scope: Deactivated successfully.
May 14 23:58:31.882970 systemd[1]: cri-containerd-124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1.scope: Consumed 7.989s CPU time, 124.1M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:58:31.898977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2-rootfs.mount: Deactivated successfully.
May 14 23:58:31.910927 containerd[1488]: time="2025-05-14T23:58:31.910677778Z" level=info msg="shim disconnected" id=9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2 namespace=k8s.io
May 14 23:58:31.910927 containerd[1488]: time="2025-05-14T23:58:31.910757782Z" level=warning msg="cleaning up after shim disconnected" id=9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2 namespace=k8s.io
May 14 23:58:31.910927 containerd[1488]: time="2025-05-14T23:58:31.910765582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:31.918464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1-rootfs.mount: Deactivated successfully.
May 14 23:58:31.925469 containerd[1488]: time="2025-05-14T23:58:31.925033391Z" level=info msg="shim disconnected" id=124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1 namespace=k8s.io
May 14 23:58:31.925469 containerd[1488]: time="2025-05-14T23:58:31.925254161Z" level=warning msg="cleaning up after shim disconnected" id=124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1 namespace=k8s.io
May 14 23:58:31.925469 containerd[1488]: time="2025-05-14T23:58:31.925267482Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:31.940292 containerd[1488]: time="2025-05-14T23:58:31.940214121Z" level=info msg="StopContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" returns successfully"
May 14 23:58:31.944411 containerd[1488]: time="2025-05-14T23:58:31.943421827Z" level=info msg="StopPodSandbox for \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\""
May 14 23:58:31.944411 containerd[1488]: time="2025-05-14T23:58:31.944110538Z" level=info msg="Container to stop \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.946328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d-shm.mount: Deactivated successfully.
May 14 23:58:31.956457 systemd[1]: cri-containerd-1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d.scope: Deactivated successfully.
May 14 23:58:31.961643 containerd[1488]: time="2025-05-14T23:58:31.961576452Z" level=info msg="StopContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" returns successfully"
May 14 23:58:31.962522 containerd[1488]: time="2025-05-14T23:58:31.962464173Z" level=info msg="StopPodSandbox for \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\""
May 14 23:58:31.962522 containerd[1488]: time="2025-05-14T23:58:31.962520455Z" level=info msg="Container to stop \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.964539 containerd[1488]: time="2025-05-14T23:58:31.962534016Z" level=info msg="Container to stop \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.964539 containerd[1488]: time="2025-05-14T23:58:31.962542416Z" level=info msg="Container to stop \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.964539 containerd[1488]: time="2025-05-14T23:58:31.962550616Z" level=info msg="Container to stop \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.964539 containerd[1488]: time="2025-05-14T23:58:31.962558777Z" level=info msg="Container to stop \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:58:31.964452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836-shm.mount: Deactivated successfully.
May 14 23:58:31.971332 systemd[1]: cri-containerd-82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836.scope: Deactivated successfully.
May 14 23:58:32.001108 containerd[1488]: time="2025-05-14T23:58:32.001035326Z" level=info msg="shim disconnected" id=82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836 namespace=k8s.io
May 14 23:58:32.001349 containerd[1488]: time="2025-05-14T23:58:32.001097769Z" level=warning msg="cleaning up after shim disconnected" id=82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836 namespace=k8s.io
May 14 23:58:32.001349 containerd[1488]: time="2025-05-14T23:58:32.001225055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:32.001748 containerd[1488]: time="2025-05-14T23:58:32.001539189Z" level=info msg="shim disconnected" id=1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d namespace=k8s.io
May 14 23:58:32.001748 containerd[1488]: time="2025-05-14T23:58:32.001589191Z" level=warning msg="cleaning up after shim disconnected" id=1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d namespace=k8s.io
May 14 23:58:32.001748 containerd[1488]: time="2025-05-14T23:58:32.001596792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:32.020739 containerd[1488]: time="2025-05-14T23:58:32.019879664Z" level=info msg="TearDown network for sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" successfully"
May 14 23:58:32.020739 containerd[1488]: time="2025-05-14T23:58:32.019914945Z" level=info msg="StopPodSandbox for \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" returns successfully"
May 14 23:58:32.022440 containerd[1488]: time="2025-05-14T23:58:32.021944558Z" level=info msg="TearDown network for sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" successfully"
May 14 23:58:32.022794 containerd[1488]: time="2025-05-14T23:58:32.022769475Z" level=info msg="StopPodSandbox for \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" returns successfully"
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.131861 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-etc-cni-netd\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.131914 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-bpf-maps\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.131958 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-config-path\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.131984 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cni-path\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.132005 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-cgroup\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.134735 kubelet[2793]: I0514 23:58:32.132016 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.135451 kubelet[2793]: I0514 23:58:32.132055 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.135451 kubelet[2793]: I0514 23:58:32.132029 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-kernel\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135451 kubelet[2793]: I0514 23:58:32.132084 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.135451 kubelet[2793]: I0514 23:58:32.132104 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-xtables-lock\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135451 kubelet[2793]: I0514 23:58:32.132145 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrqwz\" (UniqueName: \"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-kube-api-access-qrqwz\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132205 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69c9q\" (UniqueName: \"kubernetes.io/projected/e8ef48f4-af73-476d-a0a1-f5ae2a271737-kube-api-access-69c9q\") pod \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\" (UID: \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132278 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hostproc\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132312 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-clustermesh-secrets\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132337 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-run\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132367 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-net\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135655 kubelet[2793]: I0514 23:58:32.132403 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8ef48f4-af73-476d-a0a1-f5ae2a271737-cilium-config-path\") pod \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\" (UID: \"e8ef48f4-af73-476d-a0a1-f5ae2a271737\") "
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.132574 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-lib-modules\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.132627 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hubble-tls\") pod \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\" (UID: \"74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c\") "
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.132697 2793 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-etc-cni-netd\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.132744 2793 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-bpf-maps\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.132763 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-kernel\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.135888 kubelet[2793]: I0514 23:58:32.133474 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cni-path" (OuterVolumeSpecName: "cni-path") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.136087 kubelet[2793]: I0514 23:58:32.133535 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.138445 kubelet[2793]: I0514 23:58:32.137858 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.139018 kubelet[2793]: I0514 23:58:32.138985 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.141515 kubelet[2793]: I0514 23:58:32.139674 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.141515 kubelet[2793]: I0514 23:58:32.140386 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hostproc" (OuterVolumeSpecName: "hostproc") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.141515 kubelet[2793]: I0514 23:58:32.140521 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 14 23:58:32.142491 kubelet[2793]: I0514 23:58:32.142458 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:58:32.145361 kubelet[2793]: I0514 23:58:32.145327 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-kube-api-access-qrqwz" (OuterVolumeSpecName: "kube-api-access-qrqwz") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "kube-api-access-qrqwz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 14 23:58:32.145639 kubelet[2793]: I0514 23:58:32.145597 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ef48f4-af73-476d-a0a1-f5ae2a271737-kube-api-access-69c9q" (OuterVolumeSpecName: "kube-api-access-69c9q") pod "e8ef48f4-af73-476d-a0a1-f5ae2a271737" (UID: "e8ef48f4-af73-476d-a0a1-f5ae2a271737"). InnerVolumeSpecName "kube-api-access-69c9q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 14 23:58:32.146625 kubelet[2793]: I0514 23:58:32.146573 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ef48f4-af73-476d-a0a1-f5ae2a271737-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8ef48f4-af73-476d-a0a1-f5ae2a271737" (UID: "e8ef48f4-af73-476d-a0a1-f5ae2a271737"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 14 23:58:32.147249 kubelet[2793]: I0514 23:58:32.147217 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 14 23:58:32.147825 kubelet[2793]: I0514 23:58:32.147762 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" (UID: "74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 14 23:58:32.233412 kubelet[2793]: I0514 23:58:32.233355 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8ef48f4-af73-476d-a0a1-f5ae2a271737-cilium-config-path\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233641 2793 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-lib-modules\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233673 2793 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hubble-tls\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233695 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-config-path\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233765 2793 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cni-path\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233784 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-cgroup\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233808 2793 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-xtables-lock\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233825 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qrqwz\" (UniqueName: \"kubernetes.io/projected/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-kube-api-access-qrqwz\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.233945 kubelet[2793]: I0514 23:58:32.233842 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69c9q\" (UniqueName: \"kubernetes.io/projected/e8ef48f4-af73-476d-a0a1-f5ae2a271737-kube-api-access-69c9q\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.234479 kubelet[2793]: I0514 23:58:32.233859 2793 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-hostproc\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.234479 kubelet[2793]: I0514 23:58:32.233877 2793 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-clustermesh-secrets\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.234479 kubelet[2793]: I0514 23:58:32.233897 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-cilium-run\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.234479 kubelet[2793]: I0514 23:58:32.233915 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c-host-proc-sys-net\") on node \"ci-4230-1-1-n-308caa3ab6\" DevicePath \"\""
May 14 23:58:32.536943 kubelet[2793]: I0514 23:58:32.535261 2793 scope.go:117] "RemoveContainer" containerID="124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1"
May 14 23:58:32.542085 containerd[1488]: time="2025-05-14T23:58:32.542016110Z" level=info msg="RemoveContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\""
May 14 23:58:32.549521 containerd[1488]: time="2025-05-14T23:58:32.549429047Z" level=info msg="RemoveContainer for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" returns successfully"
May 14 23:58:32.553871 kubelet[2793]: I0514 23:58:32.553688 2793 scope.go:117] "RemoveContainer" containerID="c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc"
May 14 23:58:32.556574 systemd[1]: Removed slice kubepods-besteffort-pode8ef48f4_af73_476d_a0a1_f5ae2a271737.slice - libcontainer container kubepods-besteffort-pode8ef48f4_af73_476d_a0a1_f5ae2a271737.slice.
May 14 23:58:32.558799 containerd[1488]: time="2025-05-14T23:58:32.558659067Z" level=info msg="RemoveContainer for \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\""
May 14 23:58:32.558867 systemd[1]: Removed slice kubepods-burstable-pod74f1d0e7_f23c_4fdd_89a0_ad7e0db5258c.slice - libcontainer container kubepods-burstable-pod74f1d0e7_f23c_4fdd_89a0_ad7e0db5258c.slice.
May 14 23:58:32.558982 systemd[1]: kubepods-burstable-pod74f1d0e7_f23c_4fdd_89a0_ad7e0db5258c.slice: Consumed 8.084s CPU time, 124.5M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:58:32.565745 containerd[1488]: time="2025-05-14T23:58:32.565493138Z" level=info msg="RemoveContainer for \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\" returns successfully"
May 14 23:58:32.566333 kubelet[2793]: I0514 23:58:32.565929 2793 scope.go:117] "RemoveContainer" containerID="91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80"
May 14 23:58:32.568248 containerd[1488]: time="2025-05-14T23:58:32.567925409Z" level=info msg="RemoveContainer for \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\""
May 14 23:58:32.572842 containerd[1488]: time="2025-05-14T23:58:32.572497697Z" level=info msg="RemoveContainer for \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\" returns successfully"
May 14 23:58:32.572941 kubelet[2793]: I0514 23:58:32.572747 2793 scope.go:117] "RemoveContainer" containerID="0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473"
May 14 23:58:32.574735 containerd[1488]: time="2025-05-14T23:58:32.574459347Z" level=info msg="RemoveContainer for \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\""
May 14 23:58:32.579640 containerd[1488]: time="2025-05-14T23:58:32.579574819Z" level=info msg="RemoveContainer for \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\" returns successfully"
May 14 23:58:32.579993 kubelet[2793]: I0514 23:58:32.579877 2793 scope.go:117] "RemoveContainer" containerID="27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117"
May 14 23:58:32.581125 containerd[1488]: time="2025-05-14T23:58:32.581059607Z" level=info msg="RemoveContainer for \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\""
May 14 23:58:32.584447 containerd[1488]: time="2025-05-14T23:58:32.584396119Z" level=info msg="RemoveContainer for \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\" returns successfully"
May 14 23:58:32.586111 kubelet[2793]: I0514 23:58:32.585768 2793 scope.go:117] "RemoveContainer" containerID="124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1"
May 14 23:58:32.586542 containerd[1488]: time="2025-05-14T23:58:32.586478014Z" level=error msg="ContainerStatus for \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\": not found"
May 14 23:58:32.587268 kubelet[2793]: E0514 23:58:32.587234 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\": not found" containerID="124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1"
May 14 23:58:32.587383 kubelet[2793]: I0514 23:58:32.587283 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1"} err="failed to get container status \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"124043c0d500c9d0fccedf1ca8037b6850f6bd9bef66b7e861209f40d8e8b7c1\": not found"
May 14 23:58:32.587383 kubelet[2793]: I0514 23:58:32.587376 2793 scope.go:117] "RemoveContainer" containerID="c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc"
May 14 23:58:32.587613 containerd[1488]: time="2025-05-14T23:58:32.587565263Z" level=error msg="ContainerStatus for \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\": not found"
May 14 23:58:32.588579 containerd[1488]: time="2025-05-14T23:58:32.588104248Z" level=error msg="ContainerStatus for \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\": not found"
May 14 23:58:32.588635 kubelet[2793]: E0514 23:58:32.587802 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\": not found" containerID="c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc"
May 14 23:58:32.588635 kubelet[2793]: I0514 23:58:32.587836 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc"} err="failed to get container status \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6be5e5abf629a413ba4142fda975aa5e24e03be87dbb915c67c0ced982ccbfc\": not found"
May 14 23:58:32.588635 kubelet[2793]: I0514 23:58:32.587860 2793 scope.go:117] "RemoveContainer" containerID="91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80"
May 14 23:58:32.588635 kubelet[2793]: E0514 23:58:32.588372 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\": not found" containerID="91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80"
May 14 23:58:32.588635 kubelet[2793]: I0514 23:58:32.588397 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80"} err="failed to get container status \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\": rpc error: code = NotFound desc = an error occurred when try to find container \"91ddc9e1ba89fa6ba3e7c5d3c5f40501a6510ba03a43895c9b120e7c428bed80\": not found"
May 14 23:58:32.588635 kubelet[2793]: I0514 23:58:32.588433 2793 scope.go:117] "RemoveContainer" containerID="0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473"
May 14 23:58:32.588822 containerd[1488]: time="2025-05-14T23:58:32.588593910Z" level=error msg="ContainerStatus for \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\": not found"
May 14 23:58:32.588847 kubelet[2793]: E0514 23:58:32.588744 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\": not found" containerID="0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473"
May 14 23:58:32.588847 kubelet[2793]: I0514 23:58:32.588764 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473"} err="failed to get container status \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b9b420c8810c67b92dcbf656019b4da1b1682d4ed053c003e9a43bac9122473\": not found"
May 14 23:58:32.588847 kubelet[2793]: I0514 23:58:32.588778 2793 scope.go:117] "RemoveContainer" containerID="27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117"
May 14 23:58:32.589030 containerd[1488]: time="2025-05-14T23:58:32.588996448Z" level=error msg="ContainerStatus for \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\": not found"
May 14 23:58:32.589216 kubelet[2793]: E0514 23:58:32.589180 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\": not found" containerID="27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117"
May 14 23:58:32.589367 kubelet[2793]: I0514 23:58:32.589218 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117"} err="failed to get container status \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\": rpc error: code = NotFound desc = an error occurred when try to find container \"27d2711ba4005e3394ecfc831f7283acc4fea228aa66f1c4e82003b96e75c117\": not found"
May 14 23:58:32.589367 kubelet[2793]: I0514 23:58:32.589236 2793 scope.go:117] "RemoveContainer" containerID="9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2"
May 14 23:58:32.590411 containerd[1488]: time="2025-05-14T23:58:32.590387352Z" level=info msg="RemoveContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\""
May 14 23:58:32.594456 containerd[1488]: time="2025-05-14T23:58:32.594419055Z" level=info msg="RemoveContainer for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" returns successfully"
May 14 23:58:32.594662 kubelet[2793]: I0514 23:58:32.594639 2793 scope.go:117] "RemoveContainer" containerID="9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2"
May 14 23:58:32.594947 containerd[1488]: time="2025-05-14T23:58:32.594907637Z" level=error msg="ContainerStatus for \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\": not found"
May 14 23:58:32.595119 kubelet[2793]: E0514 23:58:32.595088 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\": not found" containerID="9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2"
May 14 23:58:32.595167 kubelet[2793]: I0514 23:58:32.595120 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2"} err="failed to get container status \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eeec19e1ac9a5a13462dc1e50e91274507f3ca85e3261c2fc725ae795774de2\": not found"
May 14 23:58:32.821804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836-rootfs.mount: Deactivated successfully.
May 14 23:58:32.822318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d-rootfs.mount: Deactivated successfully.
May 14 23:58:32.822605 systemd[1]: var-lib-kubelet-pods-e8ef48f4\x2daf73\x2d476d\x2da0a1\x2df5ae2a271737-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69c9q.mount: Deactivated successfully.
May 14 23:58:32.822860 systemd[1]: var-lib-kubelet-pods-74f1d0e7\x2df23c\x2d4fdd\x2d89a0\x2dad7e0db5258c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrqwz.mount: Deactivated successfully.
May 14 23:58:32.823095 systemd[1]: var-lib-kubelet-pods-74f1d0e7\x2df23c\x2d4fdd\x2d89a0\x2dad7e0db5258c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 23:58:32.823326 systemd[1]: var-lib-kubelet-pods-74f1d0e7\x2df23c\x2d4fdd\x2d89a0\x2dad7e0db5258c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 23:58:33.559183 kubelet[2793]: I0514 23:58:33.558597 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" path="/var/lib/kubelet/pods/74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c/volumes"
May 14 23:58:33.559678 kubelet[2793]: I0514 23:58:33.559301 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ef48f4-af73-476d-a0a1-f5ae2a271737" path="/var/lib/kubelet/pods/e8ef48f4-af73-476d-a0a1-f5ae2a271737/volumes"
May 14 23:58:33.906609 sshd[4391]: Connection closed by 147.75.109.163 port 46058
May 14 23:58:33.907366 sshd-session[4389]: pam_unix(sshd:session): session closed for user core
May 14 23:58:33.911308 systemd[1]: sshd@19-91.99.86.151:22-147.75.109.163:46058.service: Deactivated successfully.
May 14 23:58:33.914389 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:58:33.914901 systemd[1]: session-20.scope: Consumed 1.528s CPU time, 23.8M memory peak.
May 14 23:58:33.916655 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit.
May 14 23:58:33.918597 systemd-logind[1477]: Removed session 20.
May 14 23:58:34.081525 systemd[1]: Started sshd@20-91.99.86.151:22-147.75.109.163:35578.service - OpenSSH per-connection server daemon (147.75.109.163:35578).
May 14 23:58:34.745192 kubelet[2793]: E0514 23:58:34.745070 2793 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 23:58:35.081066 sshd[4554]: Accepted publickey for core from 147.75.109.163 port 35578 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:58:35.083386 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:58:35.089325 systemd-logind[1477]: New session 21 of user core.
May 14 23:58:35.100208 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:58:35.842658 kubelet[2793]: I0514 23:58:35.840902 2793 setters.go:602] "Node became not ready" node="ci-4230-1-1-n-308caa3ab6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:58:35Z","lastTransitionTime":"2025-05-14T23:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 23:58:36.783230 kubelet[2793]: I0514 23:58:36.783173 2793 memory_manager.go:355] "RemoveStaleState removing state" podUID="e8ef48f4-af73-476d-a0a1-f5ae2a271737" containerName="cilium-operator"
May 14 23:58:36.783230 kubelet[2793]: I0514 23:58:36.783210 2793 memory_manager.go:355] "RemoveStaleState removing state" podUID="74f1d0e7-f23c-4fdd-89a0-ad7e0db5258c" containerName="cilium-agent"
May 14 23:58:36.794850 systemd[1]: Created slice kubepods-burstable-podf0e13830_01ef_49a5_9aa9_e20ea6afe544.slice - libcontainer container kubepods-burstable-podf0e13830_01ef_49a5_9aa9_e20ea6afe544.slice.
May 14 23:58:36.797629 kubelet[2793]: W0514 23:58:36.796922 2793 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object
May 14 23:58:36.797629 kubelet[2793]: E0514 23:58:36.796978 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError"
May 14 23:58:36.797629 kubelet[2793]: W0514 23:58:36.797038 2793 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object
May 14 23:58:36.797629 kubelet[2793]: E0514 23:58:36.797050 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError"
May 14 23:58:36.797629 kubelet[2793]: I0514 23:58:36.797091 2793 status_manager.go:890] "Failed to get status for pod" podUID="f0e13830-01ef-49a5-9aa9-e20ea6afe544" pod="kube-system/cilium-47t4p" err="pods \"cilium-47t4p\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object"
May 14 23:58:36.797912 kubelet[2793]: W0514 23:58:36.797137 2793 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object
May 14 23:58:36.797912 kubelet[2793]: E0514 23:58:36.797148 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError"
May 14 23:58:36.797912 kubelet[2793]: W0514 23:58:36.797184 2793 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-1-1-n-308caa3ab6" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object
May 14 23:58:36.797912 kubelet[2793]: E0514 23:58:36.797197 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-1-1-n-308caa3ab6\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-308caa3ab6' and this object" logger="UnhandledError"
May 14 23:58:36.867088 kubelet[2793]: I0514 23:58:36.867038 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e13830-01ef-49a5-9aa9-e20ea6afe544-clustermesh-secrets\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868788 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djv4c\" (UniqueName: \"kubernetes.io/projected/f0e13830-01ef-49a5-9aa9-e20ea6afe544-kube-api-access-djv4c\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868836 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cni-path\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868860 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-lib-modules\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868875 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-cgroup\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868892 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-xtables-lock\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869173 kubelet[2793]: I0514 23:58:36.868907 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-ipsec-secrets\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.868925 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-host-proc-sys-net\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.868948 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-bpf-maps\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.868964 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-host-proc-sys-kernel\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.868985 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-run\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.869003 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-config-path\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869472 kubelet[2793]: I0514 23:58:36.869017 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e13830-01ef-49a5-9aa9-e20ea6afe544-hubble-tls\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869598 kubelet[2793]: I0514 23:58:36.869037 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-etc-cni-netd\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.869598 kubelet[2793]: I0514 23:58:36.869067 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e13830-01ef-49a5-9aa9-e20ea6afe544-hostproc\") pod \"cilium-47t4p\" (UID: \"f0e13830-01ef-49a5-9aa9-e20ea6afe544\") " pod="kube-system/cilium-47t4p"
May 14 23:58:36.957800 sshd[4556]: Connection closed by 147.75.109.163 port 35578
May 14 23:58:36.958583 sshd-session[4554]: pam_unix(sshd:session): session closed for user core
May 14 23:58:36.964484 systemd[1]: sshd@20-91.99.86.151:22-147.75.109.163:35578.service: Deactivated successfully.
May 14 23:58:36.967418 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:58:36.967714 systemd[1]: session-21.scope: Consumed 1.067s CPU time, 23.6M memory peak.
May 14 23:58:36.969260 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit.
May 14 23:58:36.971415 systemd-logind[1477]: Removed session 21.
May 14 23:58:37.140127 systemd[1]: Started sshd@21-91.99.86.151:22-147.75.109.163:35586.service - OpenSSH per-connection server daemon (147.75.109.163:35586).
May 14 23:58:37.970523 kubelet[2793]: E0514 23:58:37.970387 2793 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 14 23:58:37.970523 kubelet[2793]: E0514 23:58:37.970442 2793 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 14 23:58:37.970523 kubelet[2793]: E0514 23:58:37.970491 2793 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-47t4p: failed to sync secret cache: timed out waiting for the condition
May 14 23:58:37.970523 kubelet[2793]: E0514 23:58:37.970541 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-config-path podName:f0e13830-01ef-49a5-9aa9-e20ea6afe544 nodeName:}" failed. No retries permitted until 2025-05-14 23:58:38.470498914 +0000 UTC m=+339.038861110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f0e13830-01ef-49a5-9aa9-e20ea6afe544-cilium-config-path") pod "cilium-47t4p" (UID: "f0e13830-01ef-49a5-9aa9-e20ea6afe544") : failed to sync configmap cache: timed out waiting for the condition
May 14 23:58:37.972432 kubelet[2793]: E0514 23:58:37.970386 2793 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 14 23:58:37.972432 kubelet[2793]: E0514 23:58:37.970906 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0e13830-01ef-49a5-9aa9-e20ea6afe544-clustermesh-secrets podName:f0e13830-01ef-49a5-9aa9-e20ea6afe544 nodeName:}" failed. No retries permitted until 2025-05-14 23:58:38.470887931 +0000 UTC m=+339.039250127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f0e13830-01ef-49a5-9aa9-e20ea6afe544-clustermesh-secrets") pod "cilium-47t4p" (UID: "f0e13830-01ef-49a5-9aa9-e20ea6afe544") : failed to sync secret cache: timed out waiting for the condition
May 14 23:58:37.972967 kubelet[2793]: E0514 23:58:37.972908 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0e13830-01ef-49a5-9aa9-e20ea6afe544-hubble-tls podName:f0e13830-01ef-49a5-9aa9-e20ea6afe544 nodeName:}" failed. No retries permitted until 2025-05-14 23:58:38.472797539 +0000 UTC m=+339.041159775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f0e13830-01ef-49a5-9aa9-e20ea6afe544-hubble-tls") pod "cilium-47t4p" (UID: "f0e13830-01ef-49a5-9aa9-e20ea6afe544") : failed to sync secret cache: timed out waiting for the condition
May 14 23:58:38.147021 sshd[4569]: Accepted publickey for core from 147.75.109.163 port 35586 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:58:38.149101 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:58:38.155761 systemd-logind[1477]: New session 22 of user core.
May 14 23:58:38.164027 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:58:38.554586 kubelet[2793]: E0514 23:58:38.554486 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sd2vn" podUID="13d4ad9c-c59a-4af4-ad16-e394d3911eeb"
May 14 23:58:38.601347 containerd[1488]: time="2025-05-14T23:58:38.601255261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47t4p,Uid:f0e13830-01ef-49a5-9aa9-e20ea6afe544,Namespace:kube-system,Attempt:0,}"
May 14 23:58:38.630523 containerd[1488]: time="2025-05-14T23:58:38.630365675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:58:38.630830 containerd[1488]: time="2025-05-14T23:58:38.630540043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:58:38.630830 containerd[1488]: time="2025-05-14T23:58:38.630569525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:58:38.630830 containerd[1488]: time="2025-05-14T23:58:38.630711771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:58:38.655118 systemd[1]: Started cri-containerd-8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2.scope - libcontainer container 8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2.
May 14 23:58:38.687753 containerd[1488]: time="2025-05-14T23:58:38.687681982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47t4p,Uid:f0e13830-01ef-49a5-9aa9-e20ea6afe544,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\""
May 14 23:58:38.693529 containerd[1488]: time="2025-05-14T23:58:38.693426846Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:58:38.705733 containerd[1488]: time="2025-05-14T23:58:38.705606764Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb\""
May 14 23:58:38.705733 containerd[1488]: time="2025-05-14T23:58:38.706471724Z" level=info msg="StartContainer for \"9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb\""
May 14 23:58:38.739017 systemd[1]: Started cri-containerd-9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb.scope - libcontainer container 9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb.
May 14 23:58:38.772630 containerd[1488]: time="2025-05-14T23:58:38.772370704Z" level=info msg="StartContainer for \"9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb\" returns successfully"
May 14 23:58:38.784653 systemd[1]: cri-containerd-9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb.scope: Deactivated successfully.
May 14 23:58:38.819675 containerd[1488]: time="2025-05-14T23:58:38.818898196Z" level=info msg="shim disconnected" id=9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb namespace=k8s.io
May 14 23:58:38.819675 containerd[1488]: time="2025-05-14T23:58:38.818958799Z" level=warning msg="cleaning up after shim disconnected" id=9af274a424034a652dc1bb23af6861ae003fd3f251c2f773c5cf35821d4aaecb namespace=k8s.io
May 14 23:58:38.819675 containerd[1488]: time="2025-05-14T23:58:38.818967880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:38.832752 containerd[1488]: time="2025-05-14T23:58:38.831909073Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:58:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 14 23:58:38.839724 sshd[4572]: Connection closed by 147.75.109.163 port 35586
May 14 23:58:38.840645 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
May 14 23:58:38.846867 systemd[1]: sshd@21-91.99.86.151:22-147.75.109.163:35586.service: Deactivated successfully.
May 14 23:58:38.850609 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:58:38.853088 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit.
May 14 23:58:38.854916 systemd-logind[1477]: Removed session 22.
May 14 23:58:39.024315 systemd[1]: Started sshd@22-91.99.86.151:22-147.75.109.163:48034.service - OpenSSH per-connection server daemon (147.75.109.163:48034).
May 14 23:58:39.580864 containerd[1488]: time="2025-05-14T23:58:39.579545848Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:58:39.607651 containerd[1488]: time="2025-05-14T23:58:39.605967820Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8\""
May 14 23:58:39.612742 containerd[1488]: time="2025-05-14T23:58:39.609038241Z" level=info msg="StartContainer for \"0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8\""
May 14 23:58:39.658559 systemd[1]: Started cri-containerd-0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8.scope - libcontainer container 0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8.
May 14 23:58:39.692014 containerd[1488]: time="2025-05-14T23:58:39.691874762Z" level=info msg="StartContainer for \"0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8\" returns successfully"
May 14 23:58:39.708643 systemd[1]: cri-containerd-0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8.scope: Deactivated successfully.
May 14 23:58:39.738335 containerd[1488]: time="2025-05-14T23:58:39.738222248Z" level=info msg="shim disconnected" id=0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8 namespace=k8s.io
May 14 23:58:39.738335 containerd[1488]: time="2025-05-14T23:58:39.738313252Z" level=warning msg="cleaning up after shim disconnected" id=0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8 namespace=k8s.io
May 14 23:58:39.738335 containerd[1488]: time="2025-05-14T23:58:39.738338094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:39.747121 kubelet[2793]: E0514 23:58:39.746773 2793 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 23:58:40.034258 sshd[4690]: Accepted publickey for core from 147.75.109.163 port 48034 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY
May 14 23:58:40.036270 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:58:40.044240 systemd-logind[1477]: New session 23 of user core.
May 14 23:58:40.048463 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 23:58:40.486076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fdb61ab60f6750e960caf8361983bea46bf6e8c595e9703d69a605f9b6d40f8-rootfs.mount: Deactivated successfully.
May 14 23:58:40.555172 kubelet[2793]: E0514 23:58:40.555052 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sd2vn" podUID="13d4ad9c-c59a-4af4-ad16-e394d3911eeb"
May 14 23:58:40.587807 containerd[1488]: time="2025-05-14T23:58:40.587755696Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:58:40.617204 containerd[1488]: time="2025-05-14T23:58:40.617143646Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d\""
May 14 23:58:40.621186 containerd[1488]: time="2025-05-14T23:58:40.617867759Z" level=info msg="StartContainer for \"ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d\""
May 14 23:58:40.674436 systemd[1]: Started cri-containerd-ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d.scope - libcontainer container ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d.
May 14 23:58:40.729262 containerd[1488]: time="2025-05-14T23:58:40.729140910Z" level=info msg="StartContainer for \"ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d\" returns successfully"
May 14 23:58:40.731958 systemd[1]: cri-containerd-ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d.scope: Deactivated successfully.
May 14 23:58:40.762463 containerd[1488]: time="2025-05-14T23:58:40.762217789Z" level=info msg="shim disconnected" id=ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d namespace=k8s.io
May 14 23:58:40.763178 containerd[1488]: time="2025-05-14T23:58:40.762758974Z" level=warning msg="cleaning up after shim disconnected" id=ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d namespace=k8s.io
May 14 23:58:40.763178 containerd[1488]: time="2025-05-14T23:58:40.762888180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:41.486860 systemd[1]: run-containerd-runc-k8s.io-ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d-runc.50TwSi.mount: Deactivated successfully.
May 14 23:58:41.487037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea0fcec205c5e3a4752e503e7bdb088f3ad7713bddabbf5618eb4103bdd4890d-rootfs.mount: Deactivated successfully.
May 14 23:58:41.592132 containerd[1488]: time="2025-05-14T23:58:41.592085375Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:58:41.618598 containerd[1488]: time="2025-05-14T23:58:41.618519270Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9\""
May 14 23:58:41.620635 containerd[1488]: time="2025-05-14T23:58:41.619111537Z" level=info msg="StartContainer for \"f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9\""
May 14 23:58:41.652944 systemd[1]: Started cri-containerd-f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9.scope - libcontainer container f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9.
May 14 23:58:41.680849 systemd[1]: cri-containerd-f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9.scope: Deactivated successfully.
May 14 23:58:41.683839 containerd[1488]: time="2025-05-14T23:58:41.683807592Z" level=info msg="StartContainer for \"f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9\" returns successfully"
May 14 23:58:41.708279 containerd[1488]: time="2025-05-14T23:58:41.708134551Z" level=info msg="shim disconnected" id=f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9 namespace=k8s.io
May 14 23:58:41.708279 containerd[1488]: time="2025-05-14T23:58:41.708205274Z" level=warning msg="cleaning up after shim disconnected" id=f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9 namespace=k8s.io
May 14 23:58:41.708279 containerd[1488]: time="2025-05-14T23:58:41.708215794Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:58:42.486877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f666baf4f523c9ba6bd3b97f5527d4c03bba96d1432c231823a7bcbe36eceaa9-rootfs.mount: Deactivated successfully.
May 14 23:58:42.554851 kubelet[2793]: E0514 23:58:42.554769 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sd2vn" podUID="13d4ad9c-c59a-4af4-ad16-e394d3911eeb"
May 14 23:58:42.601992 containerd[1488]: time="2025-05-14T23:58:42.601818551Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:58:42.632026 containerd[1488]: time="2025-05-14T23:58:42.631971738Z" level=info msg="CreateContainer within sandbox \"8d12d02774b3598ace09eb42696487b20d8b889d548354e57ad947310d3edfd2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a\""
May 14 23:58:42.634216 containerd[1488]: time="2025-05-14T23:58:42.632531564Z" level=info msg="StartContainer for \"58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a\""
May 14 23:58:42.666918 systemd[1]: Started cri-containerd-58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a.scope - libcontainer container 58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a.
May 14 23:58:42.696680 containerd[1488]: time="2025-05-14T23:58:42.695879200Z" level=info msg="StartContainer for \"58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a\" returns successfully"
May 14 23:58:43.047739 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 23:58:44.555067 kubelet[2793]: E0514 23:58:44.554989 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-sd2vn" podUID="13d4ad9c-c59a-4af4-ad16-e394d3911eeb"
May 14 23:58:46.187891 systemd-networkd[1390]: lxc_health: Link UP
May 14 23:58:46.196092 systemd-networkd[1390]: lxc_health: Gained carrier
May 14 23:58:46.630680 kubelet[2793]: I0514 23:58:46.629603 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-47t4p" podStartSLOduration=10.62958653 podStartE2EDuration="10.62958653s" podCreationTimestamp="2025-05-14 23:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:58:43.620245695 +0000 UTC m=+344.188607931" watchObservedRunningTime="2025-05-14 23:58:46.62958653 +0000 UTC m=+347.197948726"
May 14 23:58:48.058000 systemd-networkd[1390]: lxc_health: Gained IPv6LL
May 14 23:58:51.445848 systemd[1]: run-containerd-runc-k8s.io-58f8a7814744c742af90b11411b2b3457e8c600734040f4c65aea865ede7608a-runc.yPyosZ.mount: Deactivated successfully.
May 14 23:58:51.669834 sshd[4752]: Connection closed by 147.75.109.163 port 48034
May 14 23:58:51.670735 sshd-session[4690]: pam_unix(sshd:session): session closed for user core
May 14 23:58:51.676640 systemd[1]: sshd@22-91.99.86.151:22-147.75.109.163:48034.service: Deactivated successfully.
May 14 23:58:51.680258 systemd[1]: session-23.scope: Deactivated successfully.
May 14 23:58:51.681223 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit.
May 14 23:58:51.682461 systemd-logind[1477]: Removed session 23.
May 14 23:58:59.591628 containerd[1488]: time="2025-05-14T23:58:59.591525283Z" level=info msg="StopPodSandbox for \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\""
May 14 23:58:59.592107 containerd[1488]: time="2025-05-14T23:58:59.591651929Z" level=info msg="TearDown network for sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" successfully"
May 14 23:58:59.592107 containerd[1488]: time="2025-05-14T23:58:59.591666609Z" level=info msg="StopPodSandbox for \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" returns successfully"
May 14 23:58:59.593671 containerd[1488]: time="2025-05-14T23:58:59.593026753Z" level=info msg="RemovePodSandbox for \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\""
May 14 23:58:59.593671 containerd[1488]: time="2025-05-14T23:58:59.593073315Z" level=info msg="Forcibly stopping sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\""
May 14 23:58:59.593671 containerd[1488]: time="2025-05-14T23:58:59.593146439Z" level=info msg="TearDown network for sandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" successfully"
May 14 23:58:59.597167 containerd[1488]: time="2025-05-14T23:58:59.597071982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:58:59.597167 containerd[1488]: time="2025-05-14T23:58:59.597147906Z" level=info msg="RemovePodSandbox \"1ad6ca1c0cf82bd0fcb85837c4e3cb2aa6d8ba69f2bec4200e43a3175864026d\" returns successfully"
May 14 23:58:59.598174 containerd[1488]: time="2025-05-14T23:58:59.597852858Z" level=info msg="StopPodSandbox for \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\""
May 14 23:58:59.598174 containerd[1488]: time="2025-05-14T23:58:59.597963984Z" level=info msg="TearDown network for sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" successfully"
May 14 23:58:59.598174 containerd[1488]: time="2025-05-14T23:58:59.597977784Z" level=info msg="StopPodSandbox for \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" returns successfully"
May 14 23:58:59.599239 containerd[1488]: time="2025-05-14T23:58:59.598500609Z" level=info msg="RemovePodSandbox for \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\""
May 14 23:58:59.599239 containerd[1488]: time="2025-05-14T23:58:59.598532250Z" level=info msg="Forcibly stopping sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\""
May 14 23:58:59.599239 containerd[1488]: time="2025-05-14T23:58:59.598593733Z" level=info msg="TearDown network for sandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" successfully"
May 14 23:58:59.602352 containerd[1488]: time="2025-05-14T23:58:59.602315787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:58:59.602656 containerd[1488]: time="2025-05-14T23:58:59.602612561Z" level=info msg="RemovePodSandbox \"82e6bba7ca3ea8abff028eba766a11b1a9cec2ce0f0469f260e9181c89f6c836\" returns successfully"
May 14 23:59:06.606810 kubelet[2793]: E0514 23:59:06.605876 2793 controller.go:195] "Failed to update lease" err="Put \"https://91.99.86.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-308caa3ab6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 14 23:59:07.070615 kubelet[2793]: E0514 23:59:07.070562 2793 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44464->10.0.0.2:2379: read: connection timed out"
May 14 23:59:07.816205 systemd[1]: cri-containerd-2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c.scope: Deactivated successfully.
May 14 23:59:07.816681 systemd[1]: cri-containerd-2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c.scope: Consumed 6.178s CPU time, 56M memory peak.
May 14 23:59:07.842471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c-rootfs.mount: Deactivated successfully.
May 14 23:59:07.849591 containerd[1488]: time="2025-05-14T23:59:07.849501398Z" level=info msg="shim disconnected" id=2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c namespace=k8s.io
May 14 23:59:07.849591 containerd[1488]: time="2025-05-14T23:59:07.849630073Z" level=warning msg="cleaning up after shim disconnected" id=2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c namespace=k8s.io
May 14 23:59:07.849591 containerd[1488]: time="2025-05-14T23:59:07.849645512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:59:08.669876 kubelet[2793]: I0514 23:59:08.669835 2793 scope.go:117] "RemoveContainer" containerID="2f89137b7aeeb036058cf1cfd3c59e7edcca7aa29c13a6d94d90b5771e430f5c"
May 14 23:59:08.672223 containerd[1488]: time="2025-05-14T23:59:08.672082549Z" level=info msg="CreateContainer within sandbox \"379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 23:59:08.689791 containerd[1488]: time="2025-05-14T23:59:08.689663689Z" level=info msg="CreateContainer within sandbox \"379f0b27d2b73cf60af726fbad59cbeaafbf64fe97bc5e5f0336e9242fc084de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ecacb48a15287035133ca61627abf24899106259b95de1376142d6d5e2f5ccbc\""
May 14 23:59:08.690508 containerd[1488]: time="2025-05-14T23:59:08.690412623Z" level=info msg="StartContainer for \"ecacb48a15287035133ca61627abf24899106259b95de1376142d6d5e2f5ccbc\""
May 14 23:59:08.720920 systemd[1]: Started cri-containerd-ecacb48a15287035133ca61627abf24899106259b95de1376142d6d5e2f5ccbc.scope - libcontainer container ecacb48a15287035133ca61627abf24899106259b95de1376142d6d5e2f5ccbc.
May 14 23:59:08.766528 containerd[1488]: time="2025-05-14T23:59:08.766468302Z" level=info msg="StartContainer for \"ecacb48a15287035133ca61627abf24899106259b95de1376142d6d5e2f5ccbc\" returns successfully"
May 14 23:59:11.484975 kubelet[2793]: E0514 23:59:11.484695 2793 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44318->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-1-n-308caa3ab6.183f8a35163861f3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-1-n-308caa3ab6,UID:a9474fde7866eaeef600502affd6952d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-308caa3ab6,},FirstTimestamp:2025-05-14 23:59:01.053735411 +0000 UTC m=+361.622097647,LastTimestamp:2025-05-14 23:59:01.053735411 +0000 UTC m=+361.622097647,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-308caa3ab6,}"
May 14 23:59:12.771064 systemd[1]: cri-containerd-a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5.scope: Deactivated successfully.
May 14 23:59:12.773511 systemd[1]: cri-containerd-a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5.scope: Consumed 6.134s CPU time, 23M memory peak.
May 14 23:59:12.797380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5-rootfs.mount: Deactivated successfully.
May 14 23:59:12.803904 containerd[1488]: time="2025-05-14T23:59:12.803840226Z" level=info msg="shim disconnected" id=a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5 namespace=k8s.io
May 14 23:59:12.804594 containerd[1488]: time="2025-05-14T23:59:12.804374408Z" level=warning msg="cleaning up after shim disconnected" id=a12e582e7e29a84d6a99de4f4cc32d5ce1001499834076c6bab792fea179fcc5 namespace=k8s.io
May 14 23:59:12.804594 containerd[1488]: time="2025-05-14T23:59:12.804402407Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:59:12.912222 systemd[1]: Started sshd@23-91.99.86.151:22-80.94.95.115:30876.service - OpenSSH per-connection server daemon (80.94.95.115:30876).