May 14 23:55:20.891485 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 23:55:20.891509 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025 May 14 23:55:20.891519 kernel: KASLR enabled May 14 23:55:20.891525 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II May 14 23:55:20.891531 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 May 14 23:55:20.891536 kernel: random: crng init done May 14 23:55:20.891551 kernel: secureboot: Secure boot disabled May 14 23:55:20.891558 kernel: ACPI: Early table checksum verification disabled May 14 23:55:20.891564 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) May 14 23:55:20.891572 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) May 14 23:55:20.891578 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891584 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891590 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891596 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891606 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891614 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891621 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891631 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891639 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 23:55:20.891646 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) May 14 23:55:20.891652 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 May 14 23:55:20.891659 kernel: NUMA: Failed to initialise from firmware May 14 23:55:20.891665 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] May 14 23:55:20.891671 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] May 14 23:55:20.891677 kernel: Zone ranges: May 14 23:55:20.891685 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 14 23:55:20.891693 kernel: DMA32 empty May 14 23:55:20.891699 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] May 14 23:55:20.891706 kernel: Movable zone start for each node May 14 23:55:20.891712 kernel: Early memory node ranges May 14 23:55:20.891718 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] May 14 23:55:20.891725 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] May 14 23:55:20.891731 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] May 14 23:55:20.891737 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] May 14 23:55:20.891746 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] May 14 23:55:20.891752 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] May 14 23:55:20.891759 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] May 14 23:55:20.891768 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] May 14 23:55:20.891775 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] May 14 23:55:20.891784 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] May 14 23:55:20.891793 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges May 14 23:55:20.891800 kernel: psci: probing for conduit method from ACPI. May 14 23:55:20.891806 kernel: psci: PSCIv1.1 detected in firmware. May 14 23:55:20.891814 kernel: psci: Using standard PSCI v0.2 function IDs May 14 23:55:20.891820 kernel: psci: Trusted OS migration not required May 14 23:55:20.891827 kernel: psci: SMC Calling Convention v1.1 May 14 23:55:20.891833 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 23:55:20.891840 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 14 23:55:20.891846 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 14 23:55:20.891853 kernel: pcpu-alloc: [0] 0 [0] 1 May 14 23:55:20.891859 kernel: Detected PIPT I-cache on CPU0 May 14 23:55:20.891866 kernel: CPU features: detected: GIC system register CPU interface May 14 23:55:20.891880 kernel: CPU features: detected: Hardware dirty bit management May 14 23:55:20.891890 kernel: CPU features: detected: Spectre-v4 May 14 23:55:20.891897 kernel: CPU features: detected: Spectre-BHB May 14 23:55:20.891903 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 23:55:20.891910 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 23:55:20.891916 kernel: CPU features: detected: ARM erratum 1418040 May 14 23:55:20.891923 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 23:55:20.891929 kernel: alternatives: applying boot alternatives May 14 23:55:20.891937 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:55:20.891943 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 23:55:20.891950 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 23:55:20.891956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 23:55:20.891964 kernel: Fallback order for Node 0: 0 May 14 23:55:20.891971 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 May 14 23:55:20.891977 kernel: Policy zone: Normal May 14 23:55:20.891984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 23:55:20.891990 kernel: software IO TLB: area num 2. May 14 23:55:20.891996 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) May 14 23:55:20.892003 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved) May 14 23:55:20.892010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 23:55:20.892016 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 23:55:20.892023 kernel: rcu: RCU event tracing is enabled. May 14 23:55:20.892030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 23:55:20.892037 kernel: Trampoline variant of Tasks RCU enabled. May 14 23:55:20.892045 kernel: Tracing variant of Tasks RCU enabled. 
May 14 23:55:20.892051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 23:55:20.892058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 23:55:20.892064 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 23:55:20.892071 kernel: GICv3: 256 SPIs implemented May 14 23:55:20.892077 kernel: GICv3: 0 Extended SPIs implemented May 14 23:55:20.892083 kernel: Root IRQ handler: gic_handle_irq May 14 23:55:20.892090 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 14 23:55:20.892096 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 23:55:20.892102 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 23:55:20.892109 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) May 14 23:55:20.892117 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) May 14 23:55:20.892123 kernel: GICv3: using LPI property table @0x00000001000e0000 May 14 23:55:20.892130 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 May 14 23:55:20.892136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 23:55:20.892143 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:55:20.892149 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 23:55:20.892156 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 23:55:20.892163 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 23:55:20.892169 kernel: Console: colour dummy device 80x25 May 14 23:55:20.892176 kernel: ACPI: Core revision 20230628 May 14 23:55:20.892183 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 23:55:20.892191 kernel: pid_max: default: 32768 minimum: 301 May 14 23:55:20.892198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 23:55:20.892204 kernel: landlock: Up and running. May 14 23:55:20.892211 kernel: SELinux: Initializing. May 14 23:55:20.892217 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:55:20.892224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:55:20.892231 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1) May 14 23:55:20.892238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 23:55:20.892244 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 23:55:20.892252 kernel: rcu: Hierarchical SRCU implementation. May 14 23:55:20.892259 kernel: rcu: Max phase no-delay instances is 400. May 14 23:55:20.892266 kernel: Platform MSI: ITS@0x8080000 domain created May 14 23:55:20.892272 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 23:55:20.892279 kernel: Remapping and enabling EFI services. May 14 23:55:20.892285 kernel: smp: Bringing up secondary CPUs ... 
May 14 23:55:20.892292 kernel: Detected PIPT I-cache on CPU1 May 14 23:55:20.892298 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 23:55:20.892305 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 May 14 23:55:20.892313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:55:20.892320 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 23:55:20.892332 kernel: smp: Brought up 1 node, 2 CPUs May 14 23:55:20.892340 kernel: SMP: Total of 2 processors activated. May 14 23:55:20.892347 kernel: CPU features: detected: 32-bit EL0 Support May 14 23:55:20.892354 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 23:55:20.892361 kernel: CPU features: detected: Common not Private translations May 14 23:55:20.892368 kernel: CPU features: detected: CRC32 instructions May 14 23:55:20.892375 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 23:55:20.892382 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 23:55:20.892391 kernel: CPU features: detected: LSE atomic instructions May 14 23:55:20.892398 kernel: CPU features: detected: Privileged Access Never May 14 23:55:20.892413 kernel: CPU features: detected: RAS Extension Support May 14 23:55:20.892421 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 23:55:20.892428 kernel: CPU: All CPU(s) started at EL1 May 14 23:55:20.892435 kernel: alternatives: applying system-wide alternatives May 14 23:55:20.892441 kernel: devtmpfs: initialized May 14 23:55:20.892450 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 23:55:20.892458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 23:55:20.892464 kernel: pinctrl core: initialized pinctrl subsystem May 14 23:55:20.892471 kernel: SMBIOS 3.0.0 present. May 14 23:55:20.892478 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 May 14 23:55:20.892485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 23:55:20.892492 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 23:55:20.892500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 23:55:20.892507 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 23:55:20.892515 kernel: audit: initializing netlink subsys (disabled) May 14 23:55:20.892522 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 May 14 23:55:20.892529 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 23:55:20.892537 kernel: cpuidle: using governor menu May 14 23:55:20.892544 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 14 23:55:20.892551 kernel: ASID allocator initialised with 32768 entries May 14 23:55:20.892558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 23:55:20.892565 kernel: Serial: AMBA PL011 UART driver May 14 23:55:20.892572 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 23:55:20.892580 kernel: Modules: 0 pages in range for non-PLT usage May 14 23:55:20.892587 kernel: Modules: 509264 pages in range for PLT usage May 14 23:55:20.892594 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 23:55:20.892601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 23:55:20.892608 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 23:55:20.892615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 23:55:20.892622 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 23:55:20.892629 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 23:55:20.892636 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 14 23:55:20.892644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 23:55:20.892651 kernel: ACPI: Added _OSI(Module Device) May 14 23:55:20.892658 kernel: ACPI: Added _OSI(Processor Device) May 14 23:55:20.892664 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 23:55:20.892671 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 23:55:20.892678 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 23:55:20.892685 kernel: ACPI: Interpreter enabled May 14 23:55:20.892692 kernel: ACPI: Using GIC for interrupt routing May 14 23:55:20.892699 kernel: ACPI: MCFG table detected, 1 entries May 14 23:55:20.892707 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 23:55:20.892714 kernel: printk: console [ttyAMA0] enabled May 14 23:55:20.892721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 23:55:20.892888 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 23:55:20.892970 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 23:55:20.893035 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 23:55:20.893098 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 23:55:20.893163 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 23:55:20.893172 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 23:55:20.893179 kernel: PCI host bridge to bus 0000:00 May 14 23:55:20.893246 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 23:55:20.893304 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 14 23:55:20.893362 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 23:55:20.893432 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 23:55:20.893509 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 23:55:20.893588 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 May 14 23:55:20.893655 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] May 14 23:55:20.893720 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] May 14 23:55:20.893792 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.893867 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] May 14 23:55:20.893987 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894058 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] May 14 23:55:20.894129 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894195 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] May 14 23:55:20.894270 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894334 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] May 14 23:55:20.894477 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894562 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] May 14 23:55:20.894633 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894697 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] May 14 23:55:20.894767 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894830 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] May 14 23:55:20.894923 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.894997 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] May 14 23:55:20.895069 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 14 23:55:20.895133 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] May 14 23:55:20.895203 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 May 14 23:55:20.895272 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] May 14 23:55:20.895356 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 14 23:55:20.895529 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] May 14 23:55:20.895606 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 23:55:20.895672 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 14 23:55:20.895750 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 14 23:55:20.895818 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] May 14 23:55:20.895938 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 14 23:55:20.896016 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] May 14 23:55:20.896090 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] May 14 23:55:20.896163 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 14 23:55:20.896229 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] May 14 23:55:20.896301 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 14 23:55:20.896369 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] May 14 23:55:20.896452 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] May 14 23:55:20.896531 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 14 23:55:20.896598 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] May 14 23:55:20.896663 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] May 14 23:55:20.896736 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 14 23:55:20.896803 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] May 14 23:55:20.896870 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] May 14 23:55:20.896977 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 14 23:55:20.897067 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 23:55:20.897135 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 May 14 23:55:20.897199 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 May 14 23:55:20.897265 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 23:55:20.897329 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 23:55:20.897397 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 May 14 23:55:20.897483 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 23:55:20.897550 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 May 14 23:55:20.897615 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 14 23:55:20.897683 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 23:55:20.897750 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 May 14 23:55:20.897816 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 23:55:20.897894 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 14 23:55:20.897962 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 May 14 23:55:20.898031 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 May 14 23:55:20.898099 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 14 23:55:20.898163 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 May 14 23:55:20.898229 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 May 14 23:55:20.898297 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 14 23:55:20.898367 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 May 14 23:55:20.898503 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 May 14 23:55:20.898578 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 14 23:55:20.898641 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 May 14 23:55:20.898702 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 May 14 23:55:20.898768 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 14 23:55:20.898830 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 May 14 23:55:20.898939 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 May 14 23:55:20.899008 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] May 14 23:55:20.899073 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] May 14 23:55:20.899141 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] May 14 23:55:20.899203 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] May 14 23:55:20.899265 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] May 14 23:55:20.899330 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] May 14 23:55:20.899397 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] May 14 23:55:20.899528 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] May 14 23:55:20.899601 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] May 14 23:55:20.899666 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] May 14 23:55:20.899730 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] May 14 23:55:20.899795 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] May 14 23:55:20.899861 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] May 14 23:55:20.901281 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] May 14 23:55:20.901386 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] May 14 23:55:20.901540 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] May 14 23:55:20.901615 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] May 14 23:55:20.901682 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] May 14 23:55:20.901752 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] May 14 23:55:20.901819 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] May 14 23:55:20.901906 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] May 14 23:55:20.901979 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 14 23:55:20.902053 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] May 14 23:55:20.902119 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 14 23:55:20.902185 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] May 14 23:55:20.902250 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 14 23:55:20.902317 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] May 14 23:55:20.902383 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 14 23:55:20.902498 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] May 14 23:55:20.902565 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 14 23:55:20.902634 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] May 14 23:55:20.902697 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 14 23:55:20.902763 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] May 14 23:55:20.902826 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 14 23:55:20.902919 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] May 14 23:55:20.902989 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 14 23:55:20.903055 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] May 14 23:55:20.903120 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] May 14 23:55:20.903191 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] May 14 23:55:20.903269 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] May 14 23:55:20.903338 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 23:55:20.903451 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] May 14 23:55:20.903539 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 14 23:55:20.903604 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 14 23:55:20.903668 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] May 14 23:55:20.903732 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] May 14 23:55:20.903810 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] May 14 23:55:20.903911 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 14 23:55:20.903988 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 14 23:55:20.904053 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] May 14 23:55:20.904118 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] May 14 23:55:20.904195 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] May 14 23:55:20.904261 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] May 14 23:55:20.904324 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 14 23:55:20.904386 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 14 23:55:20.904524 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] May 14 23:55:20.904590 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] May 14 23:55:20.904662 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] May 14 23:55:20.904731 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 14 23:55:20.904792 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 14 23:55:20.904854 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] May 14 23:55:20.904940 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] May 14 23:55:20.905011 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] May 14 23:55:20.905077 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] May 14 23:55:20.905139 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 14 23:55:20.905202 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 14 23:55:20.905265 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] May 14 23:55:20.905334 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] May 14 23:55:20.905413 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] May 14 23:55:20.905500 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] May 14 23:55:20.905566 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 14 23:55:20.905631 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 14 23:55:20.905694 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] May 14 23:55:20.905757 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] May 14 23:55:20.905828 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] May 14 23:55:20.905911 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] May 14 23:55:20.905979 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] May 14 23:55:20.906043 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 14 23:55:20.906105 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 14 23:55:20.906169 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] May 14 23:55:20.906231 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] May 14 23:55:20.906295 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 14 23:55:20.906362 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 14 23:55:20.906463 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] May 14 23:55:20.906527 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] May 14 23:55:20.906591 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 14 23:55:20.906652 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] May 14 23:55:20.906713 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] May 14 23:55:20.906775 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] May 14 23:55:20.906849 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 23:55:20.906952 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 23:55:20.907015 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 23:55:20.907083 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] May 14 23:55:20.907143 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] May 14 23:55:20.907199 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] May 14 23:55:20.907265 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] May 14 23:55:20.907324 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] May 14 23:55:20.907395 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] May 14 23:55:20.907509 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] May 14 23:55:20.907572 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] May 14 23:55:20.907631 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] May 14 23:55:20.907698 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 14 23:55:20.907757 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] May 14 23:55:20.907821 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] May 14 23:55:20.907907 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] May 14 23:55:20.907976 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] May 14 23:55:20.908038 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] May 14 23:55:20.908108 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] May 14 23:55:20.908167 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] May 14 23:55:20.908226 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] May 14 23:55:20.908298 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] May 14 23:55:20.908360 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] May 14 23:55:20.908480 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] May 14 23:55:20.908551 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] May 14 23:55:20.908615 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] May 14 23:55:20.908676 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] May 14 23:55:20.908748 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] May 14 23:55:20.908805 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] May 14 23:55:20.908863 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] May 14 23:55:20.908901 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 23:55:20.908913 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 23:55:20.908924 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 23:55:20.908932 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 23:55:20.908939 kernel: iommu: Default domain type: Translated May 14 23:55:20.908947 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 23:55:20.908955 kernel: efivars: Registered efivars operations May 14 23:55:20.908962 kernel: vgaarb: loaded May 14 23:55:20.908970 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 23:55:20.908977 kernel: VFS: Disk quotas dquot_6.6.0 May 14 23:55:20.908985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 23:55:20.908994 kernel: pnp: PnP ACPI init May 14 23:55:20.909080 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 23:55:20.909092 kernel: pnp: PnP ACPI: found 1 devices May 14 23:55:20.909100 kernel: NET: Registered PF_INET protocol family May 14 23:55:20.909108 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 23:55:20.909116 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 23:55:20.909123 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 23:55:20.909131 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 23:55:20.909139 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 23:55:20.909188 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 23:55:20.909198 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:55:20.909206 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 23:55:20.909214 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 23:55:20.909300 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) May 14 23:55:20.909312 kernel: PCI: CLS 0 bytes, default 64 May 14 23:55:20.909320 kernel: kvm [1]: HYP mode not available May 14 23:55:20.909328 kernel: Initialise system trusted keyrings May 14 23:55:20.909335 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 23:55:20.909346 kernel: Key type asymmetric registered May 14 23:55:20.909353 kernel: Asymmetric key parser 'x509' registered May 14 23:55:20.909361 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 23:55:20.909368 kernel: io scheduler mq-deadline registered May 14 23:55:20.909376 kernel: io scheduler kyber registered May 14 23:55:20.909383 kernel: io scheduler bfq registered May 14 23:55:20.909391 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 14 23:55:20.909541 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 May 14 23:55:20.909616 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 May 14 23:55:20.909679 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.909747 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 May 14 23:55:20.909811 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 May 14 23:55:20.909882 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.909955 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 14 23:55:20.910021 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 14 23:55:20.910084 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.910147 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 14 23:55:20.910211 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 14 23:55:20.910273 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.910337 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 14 23:55:20.910439 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 14 23:55:20.910515 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.910580 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 14 23:55:20.910642 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 14 23:55:20.910703 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.910768 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 14 23:55:20.910835 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 14 23:55:20.910913 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.910983 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 14 23:55:20.911046 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 14 23:55:20.911112 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.911122 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 14 23:55:20.911184 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 14 23:55:20.911250 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 14 23:55:20.911312 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 14 23:55:20.911322 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 23:55:20.911330 kernel: ACPI: button: Power Button [PWRB] May 14 23:55:20.911337 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 23:55:20.911428 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 14 23:55:20.911508 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 14 23:55:20.911523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 23:55:20.911531 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 14 23:55:20.911597 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 14 23:55:20.911607 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 14 23:55:20.911615 kernel: thunder_xcv, ver 1.0 May 14 23:55:20.911622 kernel: thunder_bgx, ver 1.0 May 14 23:55:20.911630 kernel: nicpf, ver 1.0 May 14 23:55:20.911637 kernel: nicvf, ver 
1.0 May 14 23:55:20.911712 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 23:55:20.911775 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:55:20 UTC (1747266920) May 14 23:55:20.911785 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 23:55:20.911793 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 23:55:20.911800 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 23:55:20.911807 kernel: watchdog: Hard watchdog permanently disabled May 14 23:55:20.911815 kernel: NET: Registered PF_INET6 protocol family May 14 23:55:20.911823 kernel: Segment Routing with IPv6 May 14 23:55:20.911830 kernel: In-situ OAM (IOAM) with IPv6 May 14 23:55:20.911839 kernel: NET: Registered PF_PACKET protocol family May 14 23:55:20.911846 kernel: Key type dns_resolver registered May 14 23:55:20.911854 kernel: registered taskstats version 1 May 14 23:55:20.911861 kernel: Loading compiled-in X.509 certificates May 14 23:55:20.911869 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 14 23:55:20.911909 kernel: Key type .fscrypt registered May 14 23:55:20.911918 kernel: Key type fscrypt-provisioning registered May 14 23:55:20.911926 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 23:55:20.911933 kernel: ima: Allocated hash algorithm: sha1 May 14 23:55:20.911943 kernel: ima: No architecture policies found May 14 23:55:20.911951 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 23:55:20.911959 kernel: clk: Disabling unused clocks May 14 23:55:20.911966 kernel: Freeing unused kernel memory: 38336K May 14 23:55:20.911974 kernel: Run /init as init process May 14 23:55:20.911981 kernel: with arguments: May 14 23:55:20.911989 kernel: /init May 14 23:55:20.911996 kernel: with environment: May 14 23:55:20.912003 kernel: HOME=/ May 14 23:55:20.912012 kernel: TERM=linux May 14 23:55:20.912019 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:55:20.912028 systemd[1]: Successfully made /usr/ read-only. May 14 23:55:20.912039 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:55:20.912048 systemd[1]: Detected virtualization kvm. May 14 23:55:20.912056 systemd[1]: Detected architecture arm64. May 14 23:55:20.912063 systemd[1]: Running in initrd. May 14 23:55:20.912072 systemd[1]: No hostname configured, using default hostname. May 14 23:55:20.912080 systemd[1]: Hostname set to . May 14 23:55:20.912088 systemd[1]: Initializing machine ID from VM UUID. May 14 23:55:20.912096 systemd[1]: Queued start job for default target initrd.target. May 14 23:55:20.912104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:55:20.912112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:55:20.912121 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:55:20.912131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 14 23:55:20.912140 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:55:20.912149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:55:20.912158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:55:20.912166 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:55:20.912174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:55:20.912182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:55:20.912190 systemd[1]: Reached target paths.target - Path Units. May 14 23:55:20.912200 systemd[1]: Reached target slices.target - Slice Units. May 14 23:55:20.912208 systemd[1]: Reached target swap.target - Swaps. May 14 23:55:20.912216 systemd[1]: Reached target timers.target - Timer Units. May 14 23:55:20.912224 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:55:20.912232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:55:20.912241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:55:20.912249 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 23:55:20.912257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:55:20.912265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:55:20.912275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:55:20.912283 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:55:20.912290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:55:20.912298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:55:20.912306 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:55:20.912314 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:55:20.912322 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:55:20.912330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:55:20.912339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:55:20.912347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:55:20.912355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:55:20.912364 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:55:20.912449 systemd-journald[238]: Collecting audit messages is disabled. May 14 23:55:20.912479 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:55:20.912488 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 23:55:20.912496 kernel: Bridge firewalling registered May 14 23:55:20.912504 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:55:20.912514 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:55:20.912522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:55:20.912531 systemd-journald[238]: Journal started May 14 23:55:20.912550 systemd-journald[238]: Runtime Journal (/run/log/journal/4f40ed313de84576affed9caa97e52e8) is 8M, max 76.6M, 68.6M free. May 14 23:55:20.878019 systemd-modules-load[239]: Inserted module 'overlay' May 14 23:55:20.897341 systemd-modules-load[239]: Inserted module 'br_netfilter' May 14 23:55:20.917528 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:55:20.920607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:55:20.924830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:55:20.928263 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:55:20.936624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:55:20.937958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:55:20.939823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:55:20.943225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:55:20.953822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:55:20.955469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:55:20.961150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:55:20.969751 dracut-cmdline[272]: dracut-dracut-053 May 14 23:55:20.974242 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:55:21.002814 systemd-resolved[274]: Positive Trust Anchors: May 14 23:55:21.002829 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:55:21.002860 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:55:21.013261 systemd-resolved[274]: Defaulting to hostname 'linux'. May 14 23:55:21.014334 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:55:21.015062 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:55:21.075465 kernel: SCSI subsystem initialized May 14 23:55:21.080456 kernel: Loading iSCSI transport class v2.0-870. May 14 23:55:21.087453 kernel: iscsi: registered transport (tcp) May 14 23:55:21.100450 kernel: iscsi: registered transport (qla4xxx) May 14 23:55:21.100533 kernel: QLogic iSCSI HBA Driver May 14 23:55:21.146151 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
May 14 23:55:21.151579 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:55:21.179442 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 23:55:21.179530 kernel: device-mapper: uevent: version 1.0.3 May 14 23:55:21.179557 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:55:21.228460 kernel: raid6: neonx8 gen() 15685 MB/s May 14 23:55:21.245439 kernel: raid6: neonx4 gen() 14393 MB/s May 14 23:55:21.262444 kernel: raid6: neonx2 gen() 13163 MB/s May 14 23:55:21.279453 kernel: raid6: neonx1 gen() 10444 MB/s May 14 23:55:21.296458 kernel: raid6: int64x8 gen() 6750 MB/s May 14 23:55:21.313489 kernel: raid6: int64x4 gen() 7322 MB/s May 14 23:55:21.330460 kernel: raid6: int64x2 gen() 6076 MB/s May 14 23:55:21.347468 kernel: raid6: int64x1 gen() 5039 MB/s May 14 23:55:21.347536 kernel: raid6: using algorithm neonx8 gen() 15685 MB/s May 14 23:55:21.364463 kernel: raid6: .... xor() 11827 MB/s, rmw enabled May 14 23:55:21.364543 kernel: raid6: using neon recovery algorithm May 14 23:55:21.369451 kernel: xor: measuring software checksum speed May 14 23:55:21.369512 kernel: 8regs : 21641 MB/sec May 14 23:55:21.369532 kernel: 32regs : 21699 MB/sec May 14 23:55:21.369549 kernel: arm64_neon : 24440 MB/sec May 14 23:55:21.370440 kernel: xor: using function: arm64_neon (24440 MB/sec) May 14 23:55:21.419463 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:55:21.433098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:55:21.440692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:55:21.457576 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 14 23:55:21.461724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:55:21.472565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:55:21.487243 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation May 14 23:55:21.524514 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:55:21.530579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:55:21.579969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:55:21.587607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:55:21.609716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:55:21.611074 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:55:21.612580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:55:21.613189 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:55:21.620618 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:55:21.635287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 14 23:55:21.678084 kernel: scsi host0: Virtio SCSI HBA May 14 23:55:21.682436 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 23:55:21.682521 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 14 23:55:21.711722 kernel: ACPI: bus type USB registered May 14 23:55:21.711777 kernel: usbcore: registered new interface driver usbfs May 14 23:55:21.711788 kernel: usbcore: registered new interface driver hub May 14 23:55:21.713847 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:55:21.713988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:55:21.715456 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:55:21.717972 kernel: usbcore: registered new device driver usb May 14 23:55:21.716442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:55:21.716569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:55:21.720026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:55:21.728656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:55:21.742435 kernel: sr 0:0:0:0: Power-on or device reset occurred May 14 23:55:21.750447 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 14 23:55:21.750547 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 23:55:21.750557 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 23:55:21.750659 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 14 23:55:21.750746 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 14 23:55:21.750827 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 14 23:55:21.750927 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 23:55:21.751007 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 14 23:55:21.751084 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 14 23:55:21.746048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:55:21.754595 kernel: hub 1-0:1.0: USB hub found May 14 23:55:21.754772 kernel: hub 1-0:1.0: 4 ports detected May 14 23:55:21.755879 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 14 23:55:21.756725 kernel: hub 2-0:1.0: USB hub found May 14 23:55:21.758464 kernel: hub 2-0:1.0: 4 ports detected May 14 23:55:21.758653 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:55:21.783857 kernel: sd 0:0:0:1: Power-on or device reset occurred May 14 23:55:21.784086 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 14 23:55:21.784543 kernel: sd 0:0:0:1: [sda] Write Protect is off May 14 23:55:21.785482 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 14 23:55:21.785667 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 14 23:55:21.789467 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:55:21.793854 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:55:21.793941 kernel: GPT:17805311 != 80003071 May 14 23:55:21.793984 kernel: GPT:Alternate GPT header not at the end of the disk. 
May 14 23:55:21.794020 kernel: GPT:17805311 != 80003071 May 14 23:55:21.794043 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:55:21.794065 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:55:21.795455 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 14 23:55:21.838462 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (506) May 14 23:55:21.838513 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (517) May 14 23:55:21.850941 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 14 23:55:21.860153 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 14 23:55:21.869282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 23:55:21.877919 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 14 23:55:21.878590 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 14 23:55:21.892696 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:55:21.900051 disk-uuid[579]: Primary Header is updated. May 14 23:55:21.900051 disk-uuid[579]: Secondary Entries is updated. May 14 23:55:21.900051 disk-uuid[579]: Secondary Header is updated. May 14 23:55:21.907484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:55:21.913438 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:55:21.997437 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 14 23:55:22.131176 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 14 23:55:22.131233 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 14 23:55:22.132421 kernel: usbcore: registered new interface driver usbhid May 14 23:55:22.132448 kernel: usbhid: USB HID core driver May 14 23:55:22.238531 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 14 23:55:22.367443 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 14 23:55:22.420468 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 14 23:55:22.922361 disk-uuid[580]: The operation has completed successfully. May 14 23:55:22.922990 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 23:55:22.969895 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:55:22.970005 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:55:23.006724 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 23:55:23.011171 sh[595]: Success May 14 23:55:23.023457 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 23:55:23.078773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:55:23.088605 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:55:23.091483 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
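The GPT warnings above are plain arithmetic: the backup GPT header belongs in the very last logical block, so on the 80003072-sector sda it should sit at LBA 80003071, while the image written to the disk still carries it at LBA 17805311, i.e. the image was laid out for a roughly 8.5 GiB disk and then copied onto a 41.0 GB one. A small sketch of that arithmetic, using only the numbers reported above; disk-uuid.service repairs the layout a moment later, which is what the "Primary Header is updated" / "Secondary Header is updated" messages record.

SECTOR = 512  # bytes, matching the "512-byte logical blocks" reported for sda

def expected_backup_lba(total_sectors: int) -> int:
    """GPT keeps its backup (alternate) header in the last logical block."""
    return total_sectors - 1

disk_sectors = 80003072      # sda: 80003072 512-byte logical blocks (41.0 GB)
reported_backup = 17805311   # where the kernel actually found the backup header

print("expected backup header LBA:", expected_backup_lba(disk_sectors))  # 80003071
print("reported backup header LBA:", reported_backup)

# The mismatch implies the original image was sized for a much smaller disk:
image_bytes = (reported_backup + 1) * SECTOR
print(f"image was laid out for ~{image_bytes / 2**30:.2f} GiB")  # ~8.49 GiB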
May 14 23:55:23.122311 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 14 23:55:23.122383 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 23:55:23.122395 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:55:23.122419 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:55:23.122438 kernel: BTRFS info (device dm-0): using free space tree May 14 23:55:23.128421 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 23:55:23.129834 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:55:23.130986 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:55:23.137580 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:55:23.141722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 23:55:23.161560 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:55:23.161633 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:55:23.161644 kernel: BTRFS info (device sda6): using free space tree May 14 23:55:23.168627 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 23:55:23.168696 kernel: BTRFS info (device sda6): auto enabling async discard May 14 23:55:23.174461 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:55:23.177166 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:55:23.182664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 23:55:23.259223 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:55:23.267599 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:55:23.273716 ignition[678]: Ignition 2.20.0 May 14 23:55:23.273732 ignition[678]: Stage: fetch-offline May 14 23:55:23.273765 ignition[678]: no configs at "/usr/lib/ignition/base.d" May 14 23:55:23.276660 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:55:23.273783 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:23.273951 ignition[678]: parsed url from cmdline: "" May 14 23:55:23.273955 ignition[678]: no config URL provided May 14 23:55:23.273959 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:55:23.273966 ignition[678]: no config at "/usr/lib/ignition/user.ign" May 14 23:55:23.273971 ignition[678]: failed to fetch config: resource requires networking May 14 23:55:23.274132 ignition[678]: Ignition finished successfully May 14 23:55:23.302462 systemd-networkd[779]: lo: Link UP May 14 23:55:23.302469 systemd-networkd[779]: lo: Gained carrier May 14 23:55:23.304509 systemd-networkd[779]: Enumeration completed May 14 23:55:23.305238 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:23.305243 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:55:23.305853 systemd[1]: Started systemd-networkd.service - Network Configuration. 
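The fetch-offline entries above show the order in which Ignition looks for a configuration before networking exists: the base config directories, a config URL from the kernel command line, then the embedded /usr/lib/ignition/user.ign; with none of these present it bails out with "resource requires networking". Below is a rough Python rendering of that decision order, using the same paths the log mentions; it is a sketch of the logged behaviour, not Ignition's actual Go implementation.

import os

BASE_D = "/usr/lib/ignition/base.d"
PLATFORM_D = "/usr/lib/ignition/base.platform.d/hetzner"
USER_IGN = "/usr/lib/ignition/user.ign"

def offline_config(cmdline_url: str = ""):
    """Mirror the checks the fetch-offline stage logs above."""
    if not os.path.isdir(BASE_D):
        print(f'no configs at "{BASE_D}"')
    if not os.path.isdir(PLATFORM_D):
        print(f'no config dir at "{PLATFORM_D}"')
    if cmdline_url:
        return ("cmdline", cmdline_url)
    print("no config URL provided")
    if os.path.isfile(USER_IGN):
        return ("file", USER_IGN)
    print(f'no config at "{USER_IGN}"')
    # Nothing available locally: the fetch has to wait for the network.
    raise RuntimeError("failed to fetch config: resource requires networking")

if __name__ == "__main__":
    try:
        offline_config()
    except RuntimeError as exc:
        print(exc)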
May 14 23:55:23.306543 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:23.306546 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:55:23.307134 systemd-networkd[779]: eth0: Link UP May 14 23:55:23.307137 systemd-networkd[779]: eth0: Gained carrier May 14 23:55:23.307143 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:23.307480 systemd[1]: Reached target network.target - Network. May 14 23:55:23.311632 systemd-networkd[779]: eth1: Link UP May 14 23:55:23.311634 systemd-networkd[779]: eth1: Gained carrier May 14 23:55:23.311641 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:23.318709 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 14 23:55:23.332659 ignition[784]: Ignition 2.20.0 May 14 23:55:23.332676 ignition[784]: Stage: fetch May 14 23:55:23.332956 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 14 23:55:23.332971 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:23.333093 ignition[784]: parsed url from cmdline: "" May 14 23:55:23.333098 ignition[784]: no config URL provided May 14 23:55:23.333104 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:55:23.333115 ignition[784]: no config at "/usr/lib/ignition/user.ign" May 14 23:55:23.333240 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 14 23:55:23.335980 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 23:55:23.345541 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:55:23.373522 systemd-networkd[779]: eth0: DHCPv4 address 91.99.8.230/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 23:55:23.536535 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 14 23:55:23.541797 ignition[784]: GET result: OK May 14 23:55:23.542016 ignition[784]: parsing config with SHA512: ecf8a0bb8eacbfb43091d6a3f045542269aec0ca2892086a71c5b97f83903949bfd558ed7d7af1851d02061326a9c48cf6095d6dd2544118f83f13e989c94220 May 14 23:55:23.547910 unknown[784]: fetched base config from "system" May 14 23:55:23.547921 unknown[784]: fetched base config from "system" May 14 23:55:23.548275 ignition[784]: fetch: fetch complete May 14 23:55:23.547926 unknown[784]: fetched user config from "hetzner" May 14 23:55:23.548280 ignition[784]: fetch: fetch passed May 14 23:55:23.550397 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 23:55:23.548323 ignition[784]: Ignition finished successfully May 14 23:55:23.555635 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
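Once systemd-networkd has brought eth0/eth1 up and DHCP has handed out addresses, the fetch stage retries the Hetzner metadata endpoint and then logs the SHA512 of the config it is about to parse. A self-contained sketch of that fetch-and-hash step follows: the URL is the one in the log, while the retry count and delay are assumptions for illustration, not Ignition's real back-off policy.

import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(url: str, attempts: int = 5, delay: float = 2.0) -> bytes:
    """Fetch instance userdata, retrying while the network comes up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            print(f"GET error on attempt #{attempt}: {exc}")
            time.sleep(delay)
    raise RuntimeError("could not reach the metadata service")

if __name__ == "__main__":
    data = fetch_userdata(USERDATA_URL)
    # Ignition logs exactly this digest before parsing the config.
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())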
May 14 23:55:23.570247 ignition[791]: Ignition 2.20.0 May 14 23:55:23.570256 ignition[791]: Stage: kargs May 14 23:55:23.570464 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 14 23:55:23.570474 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:23.571477 ignition[791]: kargs: kargs passed May 14 23:55:23.571526 ignition[791]: Ignition finished successfully May 14 23:55:23.573953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:55:23.578593 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 23:55:23.590455 ignition[798]: Ignition 2.20.0 May 14 23:55:23.590468 ignition[798]: Stage: disks May 14 23:55:23.590656 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 14 23:55:23.590668 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:23.591733 ignition[798]: disks: disks passed May 14 23:55:23.591783 ignition[798]: Ignition finished successfully May 14 23:55:23.593664 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 23:55:23.594574 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:55:23.595318 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:55:23.596329 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:55:23.597380 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:55:23.598472 systemd[1]: Reached target basic.target - Basic System. May 14 23:55:23.610756 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:55:23.628754 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 14 23:55:23.631869 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 23:55:23.639558 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 23:55:23.681443 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none. May 14 23:55:23.681765 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:55:23.683059 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:55:23.688514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:55:23.692208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:55:23.695688 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 23:55:23.696550 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:55:23.696588 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:55:23.706663 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 14 23:55:23.712587 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815) May 14 23:55:23.712614 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:55:23.712624 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:55:23.712634 kernel: BTRFS info (device sda6): using free space tree May 14 23:55:23.717441 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 23:55:23.717502 kernel: BTRFS info (device sda6): auto enabling async discard May 14 23:55:23.720592 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:55:23.725958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 23:55:23.761023 coreos-metadata[817]: May 14 23:55:23.760 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 14 23:55:23.762708 coreos-metadata[817]: May 14 23:55:23.762 INFO Fetch successful May 14 23:55:23.764984 coreos-metadata[817]: May 14 23:55:23.764 INFO wrote hostname ci-4230-1-1-n-df83517ae5 to /sysroot/etc/hostname May 14 23:55:23.768955 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 23:55:23.770105 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:55:23.779130 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory May 14 23:55:23.785695 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:55:23.790616 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:55:23.888097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 23:55:23.895573 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 23:55:23.898669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 23:55:23.909437 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:55:23.930666 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 23:55:23.935494 ignition[932]: INFO : Ignition 2.20.0 May 14 23:55:23.935494 ignition[932]: INFO : Stage: mount May 14 23:55:23.935494 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:55:23.935494 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:23.935494 ignition[932]: INFO : mount: mount passed May 14 23:55:23.935494 ignition[932]: INFO : Ignition finished successfully May 14 23:55:23.936683 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 23:55:23.946622 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 23:55:24.121799 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 23:55:24.129687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:55:24.140444 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) May 14 23:55:24.142891 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:55:24.142982 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:55:24.143010 kernel: BTRFS info (device sda6): using free space tree May 14 23:55:24.146460 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 23:55:24.146523 kernel: BTRFS info (device sda6): auto enabling async discard May 14 23:55:24.149950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
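flatcar-metadata-hostname does essentially one thing here: fetch the hostname from the metadata service and write it into the not-yet-pivoted root at /sysroot/etc/hostname. A minimal sketch of that step, using the URL and target path from the log; on a running system the target would normally be /etc/hostname, and both values below are taken as-is from the entries above.

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(url: str = HOSTNAME_URL,
                   target: str = "/sysroot/etc/hostname") -> str:
    """Fetch the instance hostname and persist it for the real root."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(target, "w") as fh:
        fh.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    name = write_hostname()
    print(f"wrote hostname {name} to /sysroot/etc/hostname")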
May 14 23:55:24.177351 ignition[960]: INFO : Ignition 2.20.0 May 14 23:55:24.177351 ignition[960]: INFO : Stage: files May 14 23:55:24.178565 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:55:24.178565 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:24.178565 ignition[960]: DEBUG : files: compiled without relabeling support, skipping May 14 23:55:24.181436 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 23:55:24.181436 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 23:55:24.182961 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 23:55:24.182961 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 23:55:24.182961 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 23:55:24.185637 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 23:55:24.185637 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 23:55:24.182960 unknown[960]: wrote ssh authorized keys file for user: core May 14 23:55:24.272870 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 23:55:24.474557 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 23:55:24.474557 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:55:24.477371 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 23:55:24.977582 systemd-networkd[779]: eth1: Gained IPv6LL May 14 23:55:25.026949 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 23:55:25.099257 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:55:25.100459 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 23:55:25.233666 systemd-networkd[779]: eth0: Gained IPv6LL May 14 23:55:25.607749 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 23:55:25.760208 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:55:25.760208 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 23:55:25.763611 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 23:55:25.763611 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 14 23:55:25.763611 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 23:55:25.763611 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 23:55:25.763611 ignition[960]: INFO : files: files passed May 14 23:55:25.763611 ignition[960]: INFO : Ignition finished successfully May 14 23:55:25.764144 systemd[1]: Finished ignition-files.service - Ignition (files). 
May 14 23:55:25.775492 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 23:55:25.777806 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 23:55:25.779951 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 23:55:25.782457 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 23:55:25.791299 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:55:25.791299 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 23:55:25.794135 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:55:25.794886 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:55:25.796727 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 23:55:25.805702 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 23:55:25.831614 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 23:55:25.831838 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 23:55:25.834762 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 23:55:25.835694 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 23:55:25.836696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 23:55:25.838102 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 23:55:25.863272 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:55:25.869619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:55:25.880894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:55:25.882310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:55:25.883709 systemd[1]: Stopped target timers.target - Timer Units. May 14 23:55:25.884752 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:55:25.884899 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:55:25.886958 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:55:25.887568 systemd[1]: Stopped target basic.target - Basic System. May 14 23:55:25.888921 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:55:25.890304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:55:25.892055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:55:25.893322 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:55:25.894460 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:55:25.895550 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:55:25.896716 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:55:25.897734 systemd[1]: Stopped target swap.target - Swaps. May 14 23:55:25.898607 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:55:25.898731 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
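The ignition[960] files entries above are driven entirely by the fetched config: it writes files such as the helm and cilium archives, creates the /etc/extensions/kubernetes.raw symlink, installs prepare-helm.service plus a coreos-metadata drop-in, and enables the preset. For orientation, the sketch below builds a config with roughly that shape in Ignition's spec-3 JSON form; the spec version, file mode and unit bodies are assumptions invented for the example, and only the paths and URLs come from the log.

import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version, not from the log
    "storage": {
        "files": [
            {
                "path": "/opt/bin/cilium.tar.gz",
                "mode": 0o644,  # assumed mode
                "contents": {
                    "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"
                },
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                # Invented unit body; the real one is not visible in the log.
                "contents": "[Unit]\nDescription=Unpack helm\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-arm64.tar.gz\n\n[Install]\nWantedBy=multi-user.target\n",
            },
            {
                "name": "coreos-metadata.service",
                "dropins": [
                    # Drop-in name from the log; contents are a placeholder.
                    {"name": "00-custom-metadata.conf", "contents": "[Service]\n"}
                ],
            },
        ]
    },
}

print(json.dumps(config, indent=2))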
May 14 23:55:25.899933 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:55:25.900599 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:55:25.901645 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 23:55:25.901718 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:55:25.902792 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:55:25.902930 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:55:25.904386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:55:25.904522 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:55:25.905909 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:55:25.906002 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:55:25.906902 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 23:55:25.906992 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 23:55:25.916780 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:55:25.918131 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 23:55:25.918431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:55:25.922648 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:55:25.924078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 23:55:25.924201 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:55:25.926364 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:55:25.926945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:55:25.937525 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 23:55:25.939500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:55:25.943389 ignition[1013]: INFO : Ignition 2.20.0 May 14 23:55:25.943389 ignition[1013]: INFO : Stage: umount May 14 23:55:25.943389 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:55:25.943389 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 23:55:25.943389 ignition[1013]: INFO : umount: umount passed May 14 23:55:25.943389 ignition[1013]: INFO : Ignition finished successfully May 14 23:55:25.945096 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:55:25.945188 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:55:25.947935 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:55:25.948040 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:55:25.949920 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:55:25.949969 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:55:25.960690 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 23:55:25.966099 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 23:55:25.967637 systemd[1]: Stopped target network.target - Network. May 14 23:55:25.968581 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 23:55:25.968644 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
May 14 23:55:25.969327 systemd[1]: Stopped target paths.target - Path Units. May 14 23:55:25.969855 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:55:25.976488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:55:25.982993 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:55:25.984008 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:55:25.984682 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:55:25.984731 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:55:25.985672 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:55:25.985719 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:55:25.986447 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:55:25.986515 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:55:25.987320 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:55:25.987367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:55:25.990778 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:55:25.992134 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:55:25.996147 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:55:25.996695 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:55:25.996793 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 23:55:25.997969 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:55:25.998057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:55:26.005675 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:55:26.005815 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:55:26.011146 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:55:26.011721 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:55:26.011909 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:55:26.016506 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:55:26.017198 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:55:26.017255 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:55:26.023567 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:55:26.024061 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:55:26.024121 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:55:26.025624 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:55:26.025667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:55:26.026357 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 23:55:26.026398 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:55:26.027815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:55:26.027897 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:55:26.029157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 14 23:55:26.031661 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:55:26.031719 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:55:26.041483 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:55:26.041602 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:55:26.051609 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 23:55:26.051901 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:55:26.054446 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:55:26.054494 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:55:26.055235 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:55:26.055272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:55:26.055861 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:55:26.055909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:55:26.057490 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:55:26.057539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:55:26.058998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:55:26.059041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:55:26.066663 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:55:26.067234 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:55:26.067293 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:55:26.069585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:55:26.069628 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:55:26.071569 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 23:55:26.071621 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:55:26.076790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:55:26.076950 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:55:26.078256 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:55:26.083659 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:55:26.096359 systemd[1]: Switching root. May 14 23:55:26.124287 systemd-journald[238]: Journal stopped May 14 23:55:26.957900 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
May 14 23:55:26.957958 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:55:26.957971 kernel: SELinux: policy capability open_perms=1 May 14 23:55:26.957982 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:55:26.957992 kernel: SELinux: policy capability always_check_network=0 May 14 23:55:26.958001 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:55:26.958011 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:55:26.958020 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:55:26.958030 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:55:26.958040 kernel: audit: type=1403 audit(1747266926.230:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:55:26.958051 systemd[1]: Successfully loaded SELinux policy in 37.498ms. May 14 23:55:26.958077 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.925ms. May 14 23:55:26.958093 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:55:26.958104 systemd[1]: Detected virtualization kvm. May 14 23:55:26.958115 systemd[1]: Detected architecture arm64. May 14 23:55:26.958126 systemd[1]: Detected first boot. May 14 23:55:26.958136 systemd[1]: Hostname set to . May 14 23:55:26.958148 systemd[1]: Initializing machine ID from VM UUID. May 14 23:55:26.958160 zram_generator::config[1058]: No configuration found. May 14 23:55:26.958173 kernel: NET: Registered PF_VSOCK protocol family May 14 23:55:26.958185 systemd[1]: Populated /etc with preset unit settings. May 14 23:55:26.958196 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:55:26.958207 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 23:55:26.958217 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:55:26.958228 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:55:26.958238 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:55:26.958249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:55:26.958259 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:55:26.958271 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 23:55:26.958282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 23:55:26.958293 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:55:26.958303 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:55:26.958314 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:55:26.958325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:55:26.958335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:55:26.958346 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:55:26.958357 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
May 14 23:55:26.958369 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:55:26.958380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:55:26.958391 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 23:55:26.958413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:55:26.958425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:55:26.961900 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:55:26.961935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:55:26.961947 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:55:26.961959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:55:26.961969 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:55:26.961980 systemd[1]: Reached target slices.target - Slice Units. May 14 23:55:26.961991 systemd[1]: Reached target swap.target - Swaps. May 14 23:55:26.962002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:55:26.962014 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:55:26.962029 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:55:26.962042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:55:26.962053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:55:26.962063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:55:26.962075 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:55:26.962086 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:55:26.962096 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:55:26.962109 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:55:26.962120 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:55:26.962131 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 23:55:26.962142 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 23:55:26.962154 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:55:26.962165 systemd[1]: Reached target machines.target - Containers. May 14 23:55:26.962177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 23:55:26.962189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:55:26.962200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:55:26.962212 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:55:26.962222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:55:26.962233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:55:26.962248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 14 23:55:26.962259 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:55:26.962270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:55:26.962281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:55:26.962291 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:55:26.962303 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:55:26.962314 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:55:26.962325 systemd[1]: Stopped systemd-fsck-usr.service. May 14 23:55:26.962337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:55:26.962348 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:55:26.962358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:55:26.962371 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:55:26.962382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:55:26.962393 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:55:26.964577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:55:26.964614 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:55:26.964627 systemd[1]: Stopped verity-setup.service. May 14 23:55:26.964638 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:55:26.964654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:55:26.964666 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:55:26.964676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 23:55:26.964687 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:55:26.964700 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:55:26.964711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:55:26.964724 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 23:55:26.964765 systemd-journald[1129]: Collecting audit messages is disabled. May 14 23:55:26.964788 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 23:55:26.964799 kernel: loop: module loaded May 14 23:55:26.964810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:55:26.964821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:55:26.964832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:55:26.964879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:55:26.964894 systemd-journald[1129]: Journal started May 14 23:55:26.964917 systemd-journald[1129]: Runtime Journal (/run/log/journal/4f40ed313de84576affed9caa97e52e8) is 8M, max 76.6M, 68.6M free. May 14 23:55:26.739321 systemd[1]: Queued start job for default target multi-user.target. May 14 23:55:26.753635 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
May 14 23:55:26.754137 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 23:55:26.973806 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:55:26.968598 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:55:26.968759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:55:26.970768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:55:26.977362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:55:26.984954 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 23:55:26.992426 kernel: fuse: init (API version 7.39) May 14 23:55:27.001498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 23:55:27.002438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:55:27.005611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:55:27.010475 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:55:27.011524 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:55:27.012813 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:55:27.014114 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 23:55:27.027681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 23:55:27.028973 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 23:55:27.029044 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:55:27.038175 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 23:55:27.038424 kernel: ACPI: bus type drm_connector registered May 14 23:55:27.043721 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 23:55:27.047045 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 23:55:27.048779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:55:27.052617 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 23:55:27.062178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 23:55:27.065532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:55:27.068698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 23:55:27.074193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 23:55:27.078833 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 23:55:27.080240 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:55:27.080393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:55:27.081774 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 23:55:27.084558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 23:55:27.086658 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 23:55:27.088368 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 23:55:27.102641 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 23:55:27.103752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:55:27.115769 systemd-journald[1129]: Time spent on flushing to /var/log/journal/4f40ed313de84576affed9caa97e52e8 is 46.469ms for 1146 entries. May 14 23:55:27.115769 systemd-journald[1129]: System Journal (/var/log/journal/4f40ed313de84576affed9caa97e52e8) is 8M, max 584.8M, 576.8M free. May 14 23:55:27.189478 systemd-journald[1129]: Received client request to flush runtime journal. May 14 23:55:27.189562 kernel: loop0: detected capacity change from 0 to 8 May 14 23:55:27.189587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 23:55:27.189609 kernel: loop1: detected capacity change from 0 to 113512 May 14 23:55:27.117208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 23:55:27.124708 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 23:55:27.128810 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 23:55:27.143700 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 23:55:27.147994 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 23:55:27.193324 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 23:55:27.210461 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 23:55:27.211699 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 23:55:27.219655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:55:27.233436 kernel: loop2: detected capacity change from 0 to 189592 May 14 23:55:27.247869 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 14 23:55:27.248339 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 14 23:55:27.272499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:55:27.289463 kernel: loop3: detected capacity change from 0 to 123192 May 14 23:55:27.328903 kernel: loop4: detected capacity change from 0 to 8 May 14 23:55:27.333513 kernel: loop5: detected capacity change from 0 to 113512 May 14 23:55:27.352263 kernel: loop6: detected capacity change from 0 to 189592 May 14 23:55:27.383515 kernel: loop7: detected capacity change from 0 to 123192 May 14 23:55:27.397083 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 14 23:55:27.398899 (sd-merge)[1203]: Merged extensions into '/usr'. May 14 23:55:27.403813 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... May 14 23:55:27.403825 systemd[1]: Reloading... May 14 23:55:27.537517 zram_generator::config[1232]: No configuration found. May 14 23:55:27.560443 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
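The (sd-merge) entries above show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-hetzner' extension images, merging them into /usr, and then asking systemd to reload. Below is a small sketch of the discovery half, listing images from the usual sysext search directories; the directory list follows systemd-sysext's documented search path and should be treated as an assumption here, since the journal itself only shows /etc/extensions being populated.

from pathlib import Path

# Assumed search path (per systemd-sysext documentation).
SEARCH_DIRS = [
    Path("/etc/extensions"),
    Path("/run/extensions"),
    Path("/var/lib/extensions"),
    Path("/usr/lib/extensions"),
]

def list_extensions():
    """Return the extension images/directories a merge would consider."""
    found = []
    for d in SEARCH_DIRS:
        if d.is_dir():
            found.extend(sorted(p.name for p in d.iterdir()))
    return found

if __name__ == "__main__":
    names = list_extensions()
    print("Using extensions:", ", ".join(repr(n) for n in names) or "(none)")
    # On the machine above this would include 'kubernetes' (kubernetes.raw)
    # alongside the Flatcar-provided containerd, docker and oem images.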
May 14 23:55:27.657963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:55:27.720552 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 23:55:27.720745 systemd[1]: Reloading finished in 315 ms. May 14 23:55:27.739781 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:55:27.740934 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 23:55:27.741945 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 23:55:27.755724 systemd[1]: Starting ensure-sysext.service... May 14 23:55:27.759587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:55:27.762671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:55:27.774642 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... May 14 23:55:27.774664 systemd[1]: Reloading... May 14 23:55:27.790778 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 23:55:27.791012 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 23:55:27.791687 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 23:55:27.791914 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 23:55:27.791963 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 23:55:27.796518 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:55:27.796527 systemd-tmpfiles[1270]: Skipping /boot May 14 23:55:27.805974 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:55:27.806091 systemd-tmpfiles[1270]: Skipping /boot May 14 23:55:27.809049 systemd-udevd[1271]: Using default interface naming scheme 'v255'. May 14 23:55:27.885104 zram_generator::config[1298]: No configuration found. May 14 23:55:28.029478 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1305) May 14 23:55:28.049386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:55:28.099479 kernel: mousedev: PS/2 mouse device common for all mice May 14 23:55:28.144019 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 23:55:28.145160 systemd[1]: Reloading finished in 370 ms. May 14 23:55:28.154295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:55:28.174462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:55:28.206561 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 14 23:55:28.222023 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:55:28.230735 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
May 14 23:55:28.233479 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 14 23:55:28.233536 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 23:55:28.233555 kernel: [drm] features: -context_init May 14 23:55:28.232600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:55:28.235437 kernel: [drm] number of scanouts: 1 May 14 23:55:28.235491 kernel: [drm] number of cap sets: 0 May 14 23:55:28.235820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:55:28.239489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:55:28.243591 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 14 23:55:28.243048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:55:28.244679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:55:28.244793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:55:28.254443 kernel: Console: switching to colour frame buffer device 160x50 May 14 23:55:28.258161 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 23:55:28.263762 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 14 23:55:28.269511 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:55:28.274470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:55:28.277913 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 23:55:28.288529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:55:28.291427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:55:28.293866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:55:28.297784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:55:28.299672 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:55:28.300076 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:55:28.313448 systemd[1]: Finished ensure-sysext.service. May 14 23:55:28.317658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 23:55:28.327064 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 23:55:28.328558 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 23:55:28.330274 augenrules[1413]: No rules May 14 23:55:28.331919 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:55:28.332149 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:55:28.335205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:55:28.345788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:55:28.347375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 23:55:28.349607 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 23:55:28.350680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:55:28.350753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:55:28.350817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:55:28.353641 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 23:55:28.359625 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:55:28.363627 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 23:55:28.366578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:55:28.368875 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:55:28.369054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:55:28.373570 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:55:28.380039 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 23:55:28.381112 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 23:55:28.387879 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 23:55:28.388530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:55:28.399085 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:55:28.413376 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:55:28.429926 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 23:55:28.441950 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:55:28.442888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:55:28.446624 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:55:28.463497 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:55:28.490435 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:55:28.496458 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:55:28.535127 systemd-resolved[1396]: Positive Trust Anchors: May 14 23:55:28.535147 systemd-resolved[1396]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:55:28.535178 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:55:28.539676 systemd-resolved[1396]: Using system hostname 'ci-4230-1-1-n-df83517ae5'. May 14 23:55:28.541321 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:55:28.542380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:55:28.544316 systemd-networkd[1393]: lo: Link UP May 14 23:55:28.546768 systemd-networkd[1393]: lo: Gained carrier May 14 23:55:28.551732 systemd-networkd[1393]: Enumeration completed May 14 23:55:28.551855 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:55:28.552249 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:28.552253 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:55:28.552855 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:28.552859 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:55:28.555509 systemd-networkd[1393]: eth0: Link UP May 14 23:55:28.555518 systemd-networkd[1393]: eth0: Gained carrier May 14 23:55:28.555542 systemd[1]: Reached target network.target - Network. May 14 23:55:28.555544 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:28.559809 systemd-networkd[1393]: eth1: Link UP May 14 23:55:28.559818 systemd-networkd[1393]: eth1: Gained carrier May 14 23:55:28.559876 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:55:28.571723 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 23:55:28.575213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 23:55:28.576718 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 23:55:28.578367 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:55:28.579678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:55:28.580541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:55:28.581191 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:55:28.581884 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
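[Editor's note] The negative trust anchors listed above are the usual private-use and special-use zones for which systemd-resolved never expects DNSSEC signatures. 10.0.0.0/8 and 192.168.0.0/16 map to single reverse zones, but 172.16.0.0/12 is not octet-aligned, which is why it appears as sixteen separate entries. A quick Python check reproduces exactly that 16.172 through 31.172 list:

    import ipaddress

    # 172.16.0.0/12 expands to sixteen /16 reverse zones, matching the log above.
    for net in ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16):
        first, second = str(net.network_address).split(".")[:2]
        print(f"{second}.{first}.in-addr.arpa")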
May 14 23:55:28.581917 systemd[1]: Reached target paths.target - Path Units. May 14 23:55:28.582497 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:55:28.583226 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:55:28.583914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:55:28.584559 systemd[1]: Reached target timers.target - Timer Units. May 14 23:55:28.586385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:55:28.590684 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:55:28.593319 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:55:28.594310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:55:28.595018 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:55:28.595145 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:55:28.597532 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 23:55:28.603105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:55:28.604307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:55:28.608458 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:55:28.609278 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:55:28.610507 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:55:28.611211 systemd[1]: Reached target basic.target - Basic System. May 14 23:55:28.612005 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:55:28.612043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:55:28.613501 systemd-networkd[1393]: eth0: DHCPv4 address 91.99.8.230/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 23:55:28.614230 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 23:55:28.616612 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:55:28.627318 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 23:55:28.631711 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:55:28.635892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:55:28.641210 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:55:28.642311 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:55:28.645870 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:55:28.651205 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:55:28.655169 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 14 23:55:28.659792 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:55:28.664426 jq[1460]: false May 14 23:55:28.666332 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
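[Editor's note] Both DHCP leases logged above are /32 host addresses whose gateway lies outside the assigned prefix, which is typical for this provider's cloud networking. A short check with Python's ipaddress module, using the lease data from the log lines above, makes the consequence explicit:

    import ipaddress

    # Leases as logged above: /32 addresses with gateways outside the prefix.
    leases = {"eth1": ("10.0.0.3/32", "10.0.0.1"),
              "eth0": ("91.99.8.230/32", "172.31.1.1")}

    for ifname, (prefix, gateway) in leases.items():
        net = ipaddress.ip_network(prefix, strict=False)
        inside = ipaddress.ip_address(gateway) in net
        print(f"{ifname}: gateway {gateway} inside {prefix}? {inside}")
    # Both checks print False: the gateway is only reachable as an on-link
    # host route, not through the assigned prefix itself.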
May 14 23:55:28.672095 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:55:28.674508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:55:28.675379 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:55:28.687515 coreos-metadata[1456]: May 14 23:55:28.687 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 14 23:55:28.688777 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:55:28.690811 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:55:28.694113 dbus-daemon[1457]: [system] SELinux support is enabled May 14 23:55:28.695700 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:55:28.707880 coreos-metadata[1456]: May 14 23:55:28.706 INFO Fetch successful May 14 23:55:28.707880 coreos-metadata[1456]: May 14 23:55:28.707 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 14 23:55:28.701990 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:55:28.708105 jq[1471]: true May 14 23:55:28.702220 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:55:28.712784 extend-filesystems[1461]: Found loop4 May 14 23:55:28.712784 extend-filesystems[1461]: Found loop5 May 14 23:55:28.712784 extend-filesystems[1461]: Found loop6 May 14 23:55:28.712784 extend-filesystems[1461]: Found loop7 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda May 14 23:55:28.712784 extend-filesystems[1461]: Found sda1 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda2 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda3 May 14 23:55:28.712784 extend-filesystems[1461]: Found usr May 14 23:55:28.712784 extend-filesystems[1461]: Found sda4 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda6 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda7 May 14 23:55:28.712784 extend-filesystems[1461]: Found sda9 May 14 23:55:28.712784 extend-filesystems[1461]: Checking size of /dev/sda9 May 14 23:55:28.718691 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:55:28.756984 coreos-metadata[1456]: May 14 23:55:28.720 INFO Fetch successful May 14 23:55:28.757015 extend-filesystems[1461]: Resized partition /dev/sda9 May 14 23:55:28.718739 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 23:55:28.761393 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) May 14 23:55:28.767589 tar[1474]: linux-arm64/helm May 14 23:55:28.771456 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 14 23:55:28.722553 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:55:28.722573 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:55:28.726898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 14 23:55:28.773110 jq[1476]: true May 14 23:55:28.727098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:55:28.758917 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:55:28.784611 update_engine[1470]: I20250514 23:55:28.784467 1470 main.cc:92] Flatcar Update Engine starting May 14 23:55:28.796959 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:55:28.797209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:55:28.798801 update_engine[1470]: I20250514 23:55:28.798746 1470 update_check_scheduler.cc:74] Next update check in 8m45s May 14 23:55:28.802252 systemd[1]: Started update-engine.service - Update Engine. May 14 23:55:28.816712 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:55:28.907432 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1302) May 14 23:55:28.910443 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 23:55:28.913501 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:55:28.920487 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 14 23:55:28.922541 systemd-logind[1468]: New seat seat0. May 14 23:55:28.937312 bash[1524]: Updated "/home/core/.ssh/authorized_keys" May 14 23:55:28.937960 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 14 23:55:28.937960 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5 May 14 23:55:28.937960 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 14 23:55:28.937745 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) May 14 23:55:28.957584 extend-filesystems[1461]: Resized filesystem in /dev/sda9 May 14 23:55:28.957584 extend-filesystems[1461]: Found sr0 May 14 23:55:28.937763 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 14 23:55:28.938043 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:55:28.939357 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:55:28.942882 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:55:28.943077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:55:28.978866 systemd[1]: Starting sshkeys.service... May 14 23:55:29.014054 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 23:55:29.026780 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 23:55:29.063329 coreos-metadata[1537]: May 14 23:55:29.063 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 14 23:55:29.075112 coreos-metadata[1537]: May 14 23:55:29.074 INFO Fetch successful May 14 23:55:29.079392 unknown[1537]: wrote ssh authorized keys file for user: core May 14 23:55:29.102328 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:55:29.108693 update-ssh-keys[1544]: Updated "/home/core/.ssh/authorized_keys" May 14 23:55:29.110781 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
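[Editor's note] For scale, the online resize reported above grows the root filesystem from 1,617,920 to 9,393,147 blocks of 4 KiB. A two-line Python conversion shows what that means in GiB:

    # Numbers taken from the resize2fs / EXT4-fs messages above; 4 KiB blocks.
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 1_617_920, 9_393_147

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"/dev/sda9: {to_gib(old_blocks):.1f} GiB -> {to_gib(new_blocks):.1f} GiB")
    # -> /dev/sda9: 6.2 GiB -> 35.8 GiB (root grown to fill the disk)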
May 14 23:55:29.115287 systemd[1]: Finished sshkeys.service. May 14 23:55:29.207488 containerd[1488]: time="2025-05-14T23:55:29.204785520Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:55:29.271177 containerd[1488]: time="2025-05-14T23:55:29.270789400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.278675 containerd[1488]: time="2025-05-14T23:55:29.278631080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:55:29.279005 containerd[1488]: time="2025-05-14T23:55:29.278987440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:55:29.279225 containerd[1488]: time="2025-05-14T23:55:29.279207000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:55:29.280103 containerd[1488]: time="2025-05-14T23:55:29.280078040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:55:29.280184 containerd[1488]: time="2025-05-14T23:55:29.280170280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.280667 containerd[1488]: time="2025-05-14T23:55:29.280392440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:55:29.280763 containerd[1488]: time="2025-05-14T23:55:29.280747240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.281356 containerd[1488]: time="2025-05-14T23:55:29.281334160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:55:29.281704 containerd[1488]: time="2025-05-14T23:55:29.281686160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.281785 containerd[1488]: time="2025-05-14T23:55:29.281770440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:55:29.282184 containerd[1488]: time="2025-05-14T23:55:29.282105280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.282439 containerd[1488]: time="2025-05-14T23:55:29.282294800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.283335 containerd[1488]: time="2025-05-14T23:55:29.283235960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:55:29.284547 containerd[1488]: time="2025-05-14T23:55:29.284161240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:55:29.284547 containerd[1488]: time="2025-05-14T23:55:29.284196520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:55:29.284547 containerd[1488]: time="2025-05-14T23:55:29.284303040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 23:55:29.284547 containerd[1488]: time="2025-05-14T23:55:29.284355720Z" level=info msg="metadata content store policy set" policy=shared May 14 23:55:29.292133 containerd[1488]: time="2025-05-14T23:55:29.292066120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.292386960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.292460640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.292498720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.292534960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.292855120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293466720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293672400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293706760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293752880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293790400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293851120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293886080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293917920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:55:29.295570 containerd[1488]: time="2025-05-14T23:55:29.293950560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.293979760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294007280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294033680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294076120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294113680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294142520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294172040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294199240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294227120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294256960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294286400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294323760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294357520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:55:29.296335 containerd[1488]: time="2025-05-14T23:55:29.294387120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:55:29.297919 containerd[1488]: time="2025-05-14T23:55:29.297897560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:55:29.298006 containerd[1488]: time="2025-05-14T23:55:29.297992480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:55:29.298062 containerd[1488]: time="2025-05-14T23:55:29.298050640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:55:29.298170 containerd[1488]: time="2025-05-14T23:55:29.298154280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:55:29.298427 containerd[1488]: time="2025-05-14T23:55:29.298398120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 14 23:55:29.298498 containerd[1488]: time="2025-05-14T23:55:29.298484800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:55:29.298732 containerd[1488]: time="2025-05-14T23:55:29.298716640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:55:29.299055 containerd[1488]: time="2025-05-14T23:55:29.299033280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:55:29.299132 containerd[1488]: time="2025-05-14T23:55:29.299116720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:55:29.299185 containerd[1488]: time="2025-05-14T23:55:29.299171880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:55:29.299230 containerd[1488]: time="2025-05-14T23:55:29.299218680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:55:29.299286 containerd[1488]: time="2025-05-14T23:55:29.299274280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:55:29.299337 containerd[1488]: time="2025-05-14T23:55:29.299325440Z" level=info msg="NRI interface is disabled by configuration." May 14 23:55:29.299472 containerd[1488]: time="2025-05-14T23:55:29.299449240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:55:29.301257 containerd[1488]: time="2025-05-14T23:55:29.299903120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:55:29.301257 containerd[1488]: time="2025-05-14T23:55:29.299963880Z" level=info msg="Connect containerd service" May 14 23:55:29.301257 containerd[1488]: time="2025-05-14T23:55:29.300009040Z" level=info msg="using legacy CRI server" May 14 23:55:29.301257 containerd[1488]: time="2025-05-14T23:55:29.300016240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:55:29.301257 containerd[1488]: time="2025-05-14T23:55:29.300273080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:55:29.303047 containerd[1488]: time="2025-05-14T23:55:29.303008600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:55:29.303534 containerd[1488]: time="2025-05-14T23:55:29.303502240Z" level=info msg="Start subscribing containerd event" May 14 23:55:29.305279 containerd[1488]: time="2025-05-14T23:55:29.304916920Z" level=info msg="Start recovering state" May 14 23:55:29.305279 containerd[1488]: time="2025-05-14T23:55:29.305016200Z" level=info msg="Start event monitor" May 14 23:55:29.305279 containerd[1488]: time="2025-05-14T23:55:29.305027320Z" level=info msg="Start snapshots syncer" May 14 23:55:29.305279 containerd[1488]: time="2025-05-14T23:55:29.305039240Z" level=info msg="Start cni network conf syncer for default" May 14 23:55:29.305279 containerd[1488]: time="2025-05-14T23:55:29.305046640Z" level=info msg="Start streaming server" May 14 23:55:29.310140 containerd[1488]: time="2025-05-14T23:55:29.310111680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:55:29.310348 containerd[1488]: time="2025-05-14T23:55:29.310329400Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:55:29.310766 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:55:29.312099 containerd[1488]: time="2025-05-14T23:55:29.312078160Z" level=info msg="containerd successfully booted in 0.110314s" May 14 23:55:29.451111 tar[1474]: linux-arm64/LICENSE May 14 23:55:29.451284 tar[1474]: linux-arm64/README.md May 14 23:55:29.464131 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:55:29.588145 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:55:29.614238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:55:29.621700 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:55:29.628529 systemd[1]: issuegen.service: Deactivated successfully. 
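[Editor's note] The only error in the containerd startup above is the CRI plugin failing to load a CNI configuration, which is expected at this stage: the dumped config points at /etc/cni/net.d and /opt/cni/bin, and nothing has populated them yet (a Kubernetes network add-on normally does that later). A small Python sketch of the same check; the file extensions are the ones libcni conventionally accepts and are an assumption here:

    import glob
    import os

    # Directories from the CRI config dumped above.
    conf_dir, bin_dir = "/etc/cni/net.d", "/opt/cni/bin"

    configs = sorted(p for ext in ("*.conf", "*.conflist", "*.json")
                     for p in glob.glob(os.path.join(conf_dir, ext)))
    if not configs:
        print(f"no network config found in {conf_dir}: cni plugin not initialized")
    else:
        print("CNI configs:", configs)
        print("plugin binaries:",
              sorted(os.listdir(bin_dir)) if os.path.isdir(bin_dir) else "missing")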
May 14 23:55:29.628946 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:55:29.636885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:55:29.646529 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:55:29.652707 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:55:29.655737 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 23:55:29.657185 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:55:29.969700 systemd-networkd[1393]: eth0: Gained IPv6LL May 14 23:55:29.970489 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 23:55:29.973783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:55:29.975209 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:55:29.988024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:55:29.991690 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:55:30.021239 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:55:30.354167 systemd-networkd[1393]: eth1: Gained IPv6LL May 14 23:55:30.354900 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 23:55:30.673709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:55:30.675878 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:55:30.676264 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:55:30.681891 systemd[1]: Startup finished in 763ms (kernel) + 5.540s (initrd) + 4.488s (userspace) = 10.792s. May 14 23:55:31.168567 kubelet[1587]: E0514 23:55:31.168469 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:55:31.171208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:55:31.171472 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:55:31.173566 systemd[1]: kubelet.service: Consumed 801ms CPU time, 231.8M memory peak. May 14 23:55:41.421841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:55:41.430788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:55:41.533226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
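[Editor's note] From here on the log is dominated by kubelet crash-looping: the process exits because /var/lib/kubelet/config.yaml does not exist, and systemd's restart policy retries roughly every ten seconds, incrementing the restart counter each time. That file is normally written by kubeadm during node init/join, so the loop is expected until the node is actually joined to a cluster. A hypothetical Python sketch of the precondition the unit is effectively waiting on (illustrative only, path taken from the error message):

    import os
    import time

    # Path from the kubelet error above; kubeadm normally creates this file.
    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    def wait_for_kubelet_config(poll_seconds=10):
        """Poll until the kubelet configuration exists (mirrors the ~10s restart loop)."""
        while not os.path.exists(KUBELET_CONFIG):
            print(f"{KUBELET_CONFIG} missing; kubelet keeps exiting, "
                  f"retrying in {poll_seconds}s")
            time.sleep(poll_seconds)
        print(f"{KUBELET_CONFIG} found; the crash loop would stop here")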
May 14 23:55:41.537837 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:55:41.581717 kubelet[1606]: E0514 23:55:41.581586 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:55:41.584491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:55:41.584656 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:55:41.585241 systemd[1]: kubelet.service: Consumed 133ms CPU time, 94.9M memory peak. May 14 23:55:51.602568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:55:51.619809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:55:51.733636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:55:51.735679 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:55:51.779835 kubelet[1621]: E0514 23:55:51.779771 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:55:51.782852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:55:51.783021 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:55:51.783552 systemd[1]: kubelet.service: Consumed 135ms CPU time, 96.8M memory peak. May 14 23:56:00.742472 systemd-timesyncd[1422]: Contacted time server 78.47.93.191:123 (2.flatcar.pool.ntp.org). May 14 23:56:00.742548 systemd-timesyncd[1422]: Initial clock synchronization to Wed 2025-05-14 23:56:00.885651 UTC. May 14 23:56:01.852468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:56:01.859769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:56:01.971724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:56:01.973612 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:02.017233 kubelet[1636]: E0514 23:56:02.017173 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:02.019211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:02.019337 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:02.019629 systemd[1]: kubelet.service: Consumed 132ms CPU time, 92M memory peak. May 14 23:56:12.102630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 23:56:12.108717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 23:56:12.218368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:56:12.223278 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:12.267076 kubelet[1651]: E0514 23:56:12.267007 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:12.269987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:12.270221 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:12.271001 systemd[1]: kubelet.service: Consumed 141ms CPU time, 96.5M memory peak. May 14 23:56:13.921576 update_engine[1470]: I20250514 23:56:13.920598 1470 update_attempter.cc:509] Updating boot flags... May 14 23:56:13.968460 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1667) May 14 23:56:14.026619 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1663) May 14 23:56:14.078574 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1663) May 14 23:56:22.352437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 14 23:56:22.359821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:56:22.461554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:56:22.466330 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:22.508378 kubelet[1687]: E0514 23:56:22.508237 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:22.511659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:22.511902 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:22.512704 systemd[1]: kubelet.service: Consumed 136ms CPU time, 94.4M memory peak. May 14 23:56:32.602663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 14 23:56:32.614767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:56:32.715919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:56:32.729327 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:32.771940 kubelet[1702]: E0514 23:56:32.771891 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:32.774325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:32.774511 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:32.774810 systemd[1]: kubelet.service: Consumed 132ms CPU time, 94.3M memory peak. May 14 23:56:42.852609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 14 23:56:42.865791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:56:42.976621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:56:42.978661 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:43.019471 kubelet[1717]: E0514 23:56:43.019391 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:43.021888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:43.022155 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:43.022704 systemd[1]: kubelet.service: Consumed 133ms CPU time, 98.1M memory peak. May 14 23:56:53.102335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 14 23:56:53.109752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:56:53.214516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:56:53.225202 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:56:53.269981 kubelet[1732]: E0514 23:56:53.269923 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:56:53.272443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:56:53.272597 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:56:53.272885 systemd[1]: kubelet.service: Consumed 136ms CPU time, 94.7M memory peak. May 14 23:57:03.352521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 14 23:57:03.365755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:03.469502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:57:03.480323 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:57:03.525176 kubelet[1748]: E0514 23:57:03.525128 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:57:03.527602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:57:03.527737 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:57:03.528249 systemd[1]: kubelet.service: Consumed 140ms CPU time, 95.9M memory peak. May 14 23:57:10.932249 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:57:10.938878 systemd[1]: Started sshd@0-91.99.8.230:22-147.75.109.163:34906.service - OpenSSH per-connection server daemon (147.75.109.163:34906). May 14 23:57:11.947636 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 34906 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:11.950426 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:11.958134 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:57:11.969931 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:57:11.980049 systemd-logind[1468]: New session 1 of user core. May 14 23:57:11.987579 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:57:12.003027 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:57:12.008078 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:57:12.011285 systemd-logind[1468]: New session c1 of user core. May 14 23:57:12.143791 systemd[1760]: Queued start job for default target default.target. May 14 23:57:12.159004 systemd[1760]: Created slice app.slice - User Application Slice. May 14 23:57:12.159247 systemd[1760]: Reached target paths.target - Paths. May 14 23:57:12.159451 systemd[1760]: Reached target timers.target - Timers. May 14 23:57:12.161618 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:57:12.174542 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:57:12.174701 systemd[1760]: Reached target sockets.target - Sockets. May 14 23:57:12.174872 systemd[1760]: Reached target basic.target - Basic System. May 14 23:57:12.174967 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:57:12.176281 systemd[1760]: Reached target default.target - Main User Target. May 14 23:57:12.176337 systemd[1760]: Startup finished in 157ms. May 14 23:57:12.186766 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:57:12.888969 systemd[1]: Started sshd@1-91.99.8.230:22-147.75.109.163:34914.service - OpenSSH per-connection server daemon (147.75.109.163:34914). May 14 23:57:13.602538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 14 23:57:13.619064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:13.751702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:57:13.753270 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:57:13.791601 kubelet[1781]: E0514 23:57:13.791495 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:57:13.794179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:57:13.794443 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:57:13.795590 systemd[1]: kubelet.service: Consumed 137ms CPU time, 94.3M memory peak. May 14 23:57:13.867136 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 34914 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:13.868604 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:13.875063 systemd-logind[1468]: New session 2 of user core. May 14 23:57:13.883064 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:57:14.544844 sshd[1788]: Connection closed by 147.75.109.163 port 34914 May 14 23:57:14.544075 sshd-session[1771]: pam_unix(sshd:session): session closed for user core May 14 23:57:14.548917 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. May 14 23:57:14.550093 systemd[1]: sshd@1-91.99.8.230:22-147.75.109.163:34914.service: Deactivated successfully. May 14 23:57:14.552091 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:57:14.553819 systemd-logind[1468]: Removed session 2. May 14 23:57:14.735959 systemd[1]: Started sshd@2-91.99.8.230:22-147.75.109.163:34918.service - OpenSSH per-connection server daemon (147.75.109.163:34918). May 14 23:57:15.747429 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 34918 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:15.749836 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:15.755951 systemd-logind[1468]: New session 3 of user core. May 14 23:57:15.762749 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:57:16.440944 sshd[1796]: Connection closed by 147.75.109.163 port 34918 May 14 23:57:16.440122 sshd-session[1794]: pam_unix(sshd:session): session closed for user core May 14 23:57:16.445595 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. May 14 23:57:16.446478 systemd[1]: sshd@2-91.99.8.230:22-147.75.109.163:34918.service: Deactivated successfully. May 14 23:57:16.449300 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:57:16.451772 systemd-logind[1468]: Removed session 3. May 14 23:57:16.626857 systemd[1]: Started sshd@3-91.99.8.230:22-147.75.109.163:34926.service - OpenSSH per-connection server daemon (147.75.109.163:34926). May 14 23:57:17.637508 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 34926 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:17.639696 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:17.645496 systemd-logind[1468]: New session 4 of user core. May 14 23:57:17.656768 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 14 23:57:18.334378 sshd[1804]: Connection closed by 147.75.109.163 port 34926 May 14 23:57:18.335351 sshd-session[1802]: pam_unix(sshd:session): session closed for user core May 14 23:57:18.340843 systemd[1]: sshd@3-91.99.8.230:22-147.75.109.163:34926.service: Deactivated successfully. May 14 23:57:18.345152 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:57:18.346575 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. May 14 23:57:18.347778 systemd-logind[1468]: Removed session 4. May 14 23:57:18.511888 systemd[1]: Started sshd@4-91.99.8.230:22-147.75.109.163:44284.service - OpenSSH per-connection server daemon (147.75.109.163:44284). May 14 23:57:19.495495 sshd[1810]: Accepted publickey for core from 147.75.109.163 port 44284 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:19.498382 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:19.504063 systemd-logind[1468]: New session 5 of user core. May 14 23:57:19.510695 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:57:20.024465 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:57:20.024795 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:57:20.042025 sudo[1813]: pam_unix(sudo:session): session closed for user root May 14 23:57:20.201765 sshd[1812]: Connection closed by 147.75.109.163 port 44284 May 14 23:57:20.201567 sshd-session[1810]: pam_unix(sshd:session): session closed for user core May 14 23:57:20.206326 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. May 14 23:57:20.206577 systemd[1]: sshd@4-91.99.8.230:22-147.75.109.163:44284.service: Deactivated successfully. May 14 23:57:20.209476 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:57:20.211625 systemd-logind[1468]: Removed session 5. May 14 23:57:20.377941 systemd[1]: Started sshd@5-91.99.8.230:22-147.75.109.163:44286.service - OpenSSH per-connection server daemon (147.75.109.163:44286). May 14 23:57:21.358276 sshd[1819]: Accepted publickey for core from 147.75.109.163 port 44286 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:21.360497 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:21.369525 systemd-logind[1468]: New session 6 of user core. May 14 23:57:21.378742 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:57:21.881245 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:57:21.881560 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:57:21.886379 sudo[1823]: pam_unix(sudo:session): session closed for user root May 14 23:57:21.892720 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:57:21.893060 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:57:21.911833 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:57:21.955454 augenrules[1845]: No rules May 14 23:57:21.957174 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:57:21.957390 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
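The sshd entries in this stretch follow a fixed pattern: a publickey login for user core from 147.75.109.163, a short session, then a disconnect. A small sketch, again assuming one journal entry per line on stdin, that pairs the "Accepted publickey" and "Connection closed by" markers visible above to show which of these short-lived sessions have already ended; nothing beyond those two markers is implied:

```python
#!/usr/bin/env python3
"""Count the short-lived SSH sessions visible in journal text on stdin (sketch)."""
import re
import sys

accepted = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
closed = re.compile(r"Connection closed by (\S+) port (\d+)")

opened, finished = [], set()
for line in sys.stdin:
    m = accepted.search(line)
    if m:
        opened.append((m.group(1), m.group(2), m.group(3)))
    m = closed.search(line)
    if m:
        finished.add((m.group(1), m.group(2)))

for user, host, port in opened:
    state = "closed" if (host, port) in finished else "still open"
    print(f"{user}@{host}:{port} -> {state}")
```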
May 14 23:57:21.958773 sudo[1822]: pam_unix(sudo:session): session closed for user root May 14 23:57:22.117013 sshd[1821]: Connection closed by 147.75.109.163 port 44286 May 14 23:57:22.117581 sshd-session[1819]: pam_unix(sshd:session): session closed for user core May 14 23:57:22.122290 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. May 14 23:57:22.124193 systemd[1]: sshd@5-91.99.8.230:22-147.75.109.163:44286.service: Deactivated successfully. May 14 23:57:22.126363 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:57:22.127844 systemd-logind[1468]: Removed session 6. May 14 23:57:22.291760 systemd[1]: Started sshd@6-91.99.8.230:22-147.75.109.163:44290.service - OpenSSH per-connection server daemon (147.75.109.163:44290). May 14 23:57:23.279738 sshd[1854]: Accepted publickey for core from 147.75.109.163 port 44290 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 14 23:57:23.281663 sshd-session[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:57:23.287004 systemd-logind[1468]: New session 7 of user core. May 14 23:57:23.298670 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:57:23.805497 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:57:23.805756 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:57:23.807199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 14 23:57:23.818355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:23.938069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:23.945834 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:57:24.031297 kubelet[1877]: E0514 23:57:24.031184 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:57:24.032929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:57:24.033072 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:57:24.033353 systemd[1]: kubelet.service: Consumed 146ms CPU time, 94M memory peak. May 14 23:57:24.170787 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:57:24.171108 (dockerd)[1890]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:57:24.402457 dockerd[1890]: time="2025-05-14T23:57:24.402149916Z" level=info msg="Starting up" May 14 23:57:24.491211 dockerd[1890]: time="2025-05-14T23:57:24.490572612Z" level=info msg="Loading containers: start." May 14 23:57:24.643437 kernel: Initializing XFRM netlink socket May 14 23:57:24.721334 systemd-networkd[1393]: docker0: Link UP May 14 23:57:24.764763 dockerd[1890]: time="2025-05-14T23:57:24.764490653Z" level=info msg="Loading containers: done." 
May 14 23:57:24.779936 dockerd[1890]: time="2025-05-14T23:57:24.779863286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:57:24.780099 dockerd[1890]: time="2025-05-14T23:57:24.779983130Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 23:57:24.780191 dockerd[1890]: time="2025-05-14T23:57:24.780155656Z" level=info msg="Daemon has completed initialization" May 14 23:57:24.812958 dockerd[1890]: time="2025-05-14T23:57:24.812900553Z" level=info msg="API listen on /run/docker.sock" May 14 23:57:24.813115 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:57:25.831524 containerd[1488]: time="2025-05-14T23:57:25.831477999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 23:57:26.483419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796178885.mount: Deactivated successfully. May 14 23:57:27.846484 containerd[1488]: time="2025-05-14T23:57:27.846301155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:27.847941 containerd[1488]: time="2025-05-14T23:57:27.847890610Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554700" May 14 23:57:27.848649 containerd[1488]: time="2025-05-14T23:57:27.848555353Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:27.853939 containerd[1488]: time="2025-05-14T23:57:27.853849176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:27.855056 containerd[1488]: time="2025-05-14T23:57:27.854841050Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.023317369s" May 14 23:57:27.855056 containerd[1488]: time="2025-05-14T23:57:27.854879651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 23:57:27.855869 containerd[1488]: time="2025-05-14T23:57:27.855639557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 23:57:29.701672 containerd[1488]: time="2025-05-14T23:57:29.701581765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:29.703525 containerd[1488]: time="2025-05-14T23:57:29.703149098Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458998" May 14 23:57:29.704467 containerd[1488]: time="2025-05-14T23:57:29.704399140Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:29.707971 containerd[1488]: time="2025-05-14T23:57:29.707893017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:29.709229 containerd[1488]: time="2025-05-14T23:57:29.709097658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.853414739s" May 14 23:57:29.709229 containerd[1488]: time="2025-05-14T23:57:29.709132939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 23:57:29.710032 containerd[1488]: time="2025-05-14T23:57:29.709822842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 23:57:31.052954 containerd[1488]: time="2025-05-14T23:57:31.052872912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:31.055099 containerd[1488]: time="2025-05-14T23:57:31.054508845Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125833" May 14 23:57:31.056424 containerd[1488]: time="2025-05-14T23:57:31.056324265Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:31.063437 containerd[1488]: time="2025-05-14T23:57:31.062208539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:31.065185 containerd[1488]: time="2025-05-14T23:57:31.065135715Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.355281632s" May 14 23:57:31.065185 containerd[1488]: time="2025-05-14T23:57:31.065178837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 23:57:31.066648 containerd[1488]: time="2025-05-14T23:57:31.066615124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 23:57:32.068229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417921252.mount: Deactivated successfully. 
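A little above, dockerd reported "API listen on /run/docker.sock", so the daemon can be health-checked over that Unix socket. A minimal sketch, assuming it runs on the node with read access to /run/docker.sock; GET /_ping is the Docker Engine API's liveness endpoint and simply returns OK:

```python
#!/usr/bin/env python3
"""Ping the Docker daemon over its Unix socket (illustrative sketch)."""
import socket

SOCK = "/run/docker.sock"  # path taken from the dockerd log line above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    # Minimal HTTP/1.0 request; the daemon closes the connection after replying.
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode(errors="replace"))
```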
May 14 23:57:32.382163 containerd[1488]: time="2025-05-14T23:57:32.382059999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:32.384162 containerd[1488]: time="2025-05-14T23:57:32.383678732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871943" May 14 23:57:32.385230 containerd[1488]: time="2025-05-14T23:57:32.385170060Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:32.387894 containerd[1488]: time="2025-05-14T23:57:32.387839387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:32.389738 containerd[1488]: time="2025-05-14T23:57:32.389560124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.322892958s" May 14 23:57:32.389738 containerd[1488]: time="2025-05-14T23:57:32.389627766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 23:57:32.390390 containerd[1488]: time="2025-05-14T23:57:32.390299588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 23:57:32.998652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951740638.mount: Deactivated successfully. 
May 14 23:57:33.641546 containerd[1488]: time="2025-05-14T23:57:33.641483744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:33.643368 containerd[1488]: time="2025-05-14T23:57:33.643316124Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" May 14 23:57:33.644027 containerd[1488]: time="2025-05-14T23:57:33.643663815Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:33.648178 containerd[1488]: time="2025-05-14T23:57:33.648096398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:33.649437 containerd[1488]: time="2025-05-14T23:57:33.649277076Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.258890126s" May 14 23:57:33.649437 containerd[1488]: time="2025-05-14T23:57:33.649314517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 23:57:33.649918 containerd[1488]: time="2025-05-14T23:57:33.649767132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:57:34.102570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 14 23:57:34.110792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:34.180551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467861269.mount: Deactivated successfully. 
May 14 23:57:34.195431 containerd[1488]: time="2025-05-14T23:57:34.194537511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:34.197646 containerd[1488]: time="2025-05-14T23:57:34.197587168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 14 23:57:34.198959 containerd[1488]: time="2025-05-14T23:57:34.198913491Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:34.201216 containerd[1488]: time="2025-05-14T23:57:34.200997277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:34.202528 containerd[1488]: time="2025-05-14T23:57:34.202191436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.397502ms" May 14 23:57:34.202528 containerd[1488]: time="2025-05-14T23:57:34.202226917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 23:57:34.203493 containerd[1488]: time="2025-05-14T23:57:34.203177667Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 23:57:34.211733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:34.217575 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:57:34.263939 kubelet[2204]: E0514 23:57:34.263876 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:57:34.266425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:57:34.266598 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:57:34.267172 systemd[1]: kubelet.service: Consumed 132ms CPU time, 94.7M memory peak. May 14 23:57:34.766516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276035247.mount: Deactivated successfully. 
May 14 23:57:37.440053 containerd[1488]: time="2025-05-14T23:57:37.439994177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:37.441385 containerd[1488]: time="2025-05-14T23:57:37.441329013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 14 23:57:37.442138 containerd[1488]: time="2025-05-14T23:57:37.441933349Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:37.446450 containerd[1488]: time="2025-05-14T23:57:37.445947896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:57:37.448129 containerd[1488]: time="2025-05-14T23:57:37.447990070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.244785522s" May 14 23:57:37.448129 containerd[1488]: time="2025-05-14T23:57:37.448028191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 23:57:42.254363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:42.254706 systemd[1]: kubelet.service: Consumed 132ms CPU time, 94.7M memory peak. May 14 23:57:42.263880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:42.294525 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-7.scope)... May 14 23:57:42.294677 systemd[1]: Reloading... May 14 23:57:42.407440 zram_generator::config[2336]: No configuration found. May 14 23:57:42.517533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:57:42.609481 systemd[1]: Reloading finished in 314 ms. May 14 23:57:42.654253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:42.664126 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:57:42.668586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:42.669562 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:57:42.670036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:42.670108 systemd[1]: kubelet.service: Consumed 89ms CPU time, 82.2M memory peak. May 14 23:57:42.684035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:42.779621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
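The containerd entries above record each control-plane image pull with its compressed size and wall-clock time (for example etcd 3.5.15-0: 66535646 bytes in roughly 3.24 s). A short sketch, assuming one journal entry per line on stdin, that extracts those "Pulled image ... size ... in ..." messages and prints an approximate per-image throughput; the regex mirrors the message format shown above and tolerates the escaped quotes that appear when the lines are copied out of the journal:

```python
#!/usr/bin/env python3
"""Rough per-image pull throughput from containerd 'Pulled image' log lines (sketch)."""
import re
import sys

pulled = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

for line in sys.stdin:
    m = pulled.search(line)
    if not m:
        continue
    seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
    mib = int(m.group("size")) / (1024 * 1024)
    print(f"{m.group('image')}: {mib:.1f} MiB in {seconds:.2f}s (~{mib / seconds:.1f} MiB/s)")
```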
May 14 23:57:42.782109 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:57:42.826308 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:57:42.826732 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:57:42.826775 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:57:42.826983 kubelet[2389]: I0514 23:57:42.826944 2389 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:57:43.432542 kubelet[2389]: I0514 23:57:43.432500 2389 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:57:43.432542 kubelet[2389]: I0514 23:57:43.432536 2389 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:57:43.432877 kubelet[2389]: I0514 23:57:43.432860 2389 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:57:43.458458 kubelet[2389]: E0514 23:57:43.457732 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.8.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:43.460276 kubelet[2389]: I0514 23:57:43.460245 2389 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:57:43.470259 kubelet[2389]: E0514 23:57:43.470220 2389 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:57:43.470463 kubelet[2389]: I0514 23:57:43.470447 2389 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:57:43.476136 kubelet[2389]: I0514 23:57:43.476097 2389 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:57:43.477637 kubelet[2389]: I0514 23:57:43.477601 2389 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:57:43.478070 kubelet[2389]: I0514 23:57:43.478019 2389 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:57:43.478543 kubelet[2389]: I0514 23:57:43.478208 2389 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-df83517ae5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:57:43.478937 kubelet[2389]: I0514 23:57:43.478911 2389 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:57:43.479724 kubelet[2389]: I0514 23:57:43.479040 2389 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:57:43.479724 kubelet[2389]: I0514 23:57:43.479321 2389 state_mem.go:36] "Initialized new in-memory state store" May 14 23:57:43.483443 kubelet[2389]: I0514 23:57:43.483391 2389 kubelet.go:408] "Attempting to sync node with API server" May 14 23:57:43.483632 kubelet[2389]: I0514 23:57:43.483608 2389 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:57:43.483804 kubelet[2389]: I0514 23:57:43.483783 2389 kubelet.go:314] "Adding apiserver pod source" May 14 23:57:43.484019 kubelet[2389]: I0514 23:57:43.483994 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:57:43.485269 kubelet[2389]: W0514 23:57:43.485192 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.8.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-df83517ae5&limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:43.485332 kubelet[2389]: E0514 23:57:43.485299 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://91.99.8.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-df83517ae5&limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:43.486564 kubelet[2389]: I0514 23:57:43.486548 2389 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:57:43.488457 kubelet[2389]: I0514 23:57:43.488436 2389 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:57:43.489486 kubelet[2389]: W0514 23:57:43.489470 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:57:43.490436 kubelet[2389]: I0514 23:57:43.490201 2389 server.go:1269] "Started kubelet" May 14 23:57:43.490436 kubelet[2389]: W0514 23:57:43.490318 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.8.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:43.490436 kubelet[2389]: E0514 23:57:43.490387 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.8.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:43.494623 kubelet[2389]: I0514 23:57:43.494584 2389 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:57:43.495313 kubelet[2389]: I0514 23:57:43.495264 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:57:43.495689 kubelet[2389]: I0514 23:57:43.495664 2389 server.go:460] "Adding debug handlers to kubelet server" May 14 23:57:43.495762 kubelet[2389]: I0514 23:57:43.495749 2389 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:57:43.497173 kubelet[2389]: E0514 23:57:43.496049 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.8.230:6443/api/v1/namespaces/default/events\": dial tcp 91.99.8.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-n-df83517ae5.183f8a2307126bae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-df83517ae5,UID:ci-4230-1-1-n-df83517ae5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-df83517ae5,},FirstTimestamp:2025-05-14 23:57:43.490177966 +0000 UTC m=+0.704394387,LastTimestamp:2025-05-14 23:57:43.490177966 +0000 UTC m=+0.704394387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-df83517ae5,}" May 14 23:57:43.499445 kubelet[2389]: I0514 23:57:43.499242 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:57:43.500537 kubelet[2389]: E0514 23:57:43.499862 2389 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:57:43.500537 kubelet[2389]: I0514 23:57:43.499999 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:57:43.502292 kubelet[2389]: E0514 23:57:43.501945 2389 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-df83517ae5\" not found" May 14 23:57:43.502292 kubelet[2389]: I0514 23:57:43.502101 2389 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:57:43.502292 kubelet[2389]: I0514 23:57:43.502287 2389 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:57:43.502433 kubelet[2389]: I0514 23:57:43.502337 2389 reconciler.go:26] "Reconciler: start to sync state" May 14 23:57:43.502883 kubelet[2389]: W0514 23:57:43.502660 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.8.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:43.502883 kubelet[2389]: E0514 23:57:43.502703 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.8.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:43.503915 kubelet[2389]: E0514 23:57:43.503831 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.8.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-df83517ae5?timeout=10s\": dial tcp 91.99.8.230:6443: connect: connection refused" interval="200ms" May 14 23:57:43.504379 kubelet[2389]: I0514 23:57:43.504185 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:57:43.507923 kubelet[2389]: I0514 23:57:43.507878 2389 factory.go:221] Registration of the containerd container factory successfully May 14 23:57:43.507923 kubelet[2389]: I0514 23:57:43.507904 2389 factory.go:221] Registration of the systemd container factory successfully May 14 23:57:43.522056 kubelet[2389]: I0514 23:57:43.522000 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:57:43.523025 kubelet[2389]: I0514 23:57:43.522995 2389 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:57:43.523025 kubelet[2389]: I0514 23:57:43.523021 2389 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:57:43.523140 kubelet[2389]: I0514 23:57:43.523040 2389 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:57:43.523140 kubelet[2389]: E0514 23:57:43.523073 2389 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:57:43.527554 kubelet[2389]: W0514 23:57:43.527317 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.8.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:43.527554 kubelet[2389]: E0514 23:57:43.527361 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.8.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:43.527554 kubelet[2389]: I0514 23:57:43.527510 2389 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:57:43.527554 kubelet[2389]: I0514 23:57:43.527520 2389 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:57:43.527554 kubelet[2389]: I0514 23:57:43.527537 2389 state_mem.go:36] "Initialized new in-memory state store" May 14 23:57:43.529759 kubelet[2389]: I0514 23:57:43.529731 2389 policy_none.go:49] "None policy: Start" May 14 23:57:43.530690 kubelet[2389]: I0514 23:57:43.530400 2389 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:57:43.530690 kubelet[2389]: I0514 23:57:43.530434 2389 state_mem.go:35] "Initializing new in-memory state store" May 14 23:57:43.537793 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:57:43.558221 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:57:43.562500 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:57:43.573323 kubelet[2389]: I0514 23:57:43.573266 2389 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:57:43.574163 kubelet[2389]: I0514 23:57:43.573818 2389 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:57:43.574163 kubelet[2389]: I0514 23:57:43.573849 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:57:43.574670 kubelet[2389]: I0514 23:57:43.574327 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:57:43.578257 kubelet[2389]: E0514 23:57:43.578201 2389 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-n-df83517ae5\" not found" May 14 23:57:43.637972 systemd[1]: Created slice kubepods-burstable-pod354916063c9009b90d3f4fbf6eaec953.slice - libcontainer container kubepods-burstable-pod354916063c9009b90d3f4fbf6eaec953.slice. May 14 23:57:43.661225 systemd[1]: Created slice kubepods-burstable-pod461c40bb89dddebbdc6a05650afaf8c6.slice - libcontainer container kubepods-burstable-pod461c40bb89dddebbdc6a05650afaf8c6.slice. 
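The container_manager_linux.go entry a little further up dumps the kubelet's effective NodeConfig as one long JSON blob (cgroup driver, hard eviction thresholds, and so on). A small sketch, assuming that journal line is available on stdin, which pulls the nodeConfig={...} payload out with a simple brace scan and prints the eviction thresholds in a readable form; the field names come from the blob itself:

```python
#!/usr/bin/env python3
"""Pretty-print the kubelet NodeConfig blob logged by container_manager_linux.go (sketch)."""
import json
import sys

def extract_json(text, marker="nodeConfig="):
    start = text.find(marker)
    if start < 0:
        return None
    start += len(marker)
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return text[start : i + 1]
    return None

for line in sys.stdin:
    blob = extract_json(line)
    if not blob:
        continue
    cfg = json.loads(blob)
    print("cgroup driver:", cfg.get("CgroupDriver"))
    for t in cfg.get("HardEvictionThresholds", []):
        val = t["Value"].get("Quantity") or f'{t["Value"].get("Percentage"):.0%}'
        print(f'  evict when {t["Signal"]} {t["Operator"]} {val}')
    break
```

Against the blob above this would report the systemd cgroup driver and thresholds such as memory.available LessThan 100Mi and nodefs.available LessThan 10%.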
May 14 23:57:43.665738 systemd[1]: Created slice kubepods-burstable-pode596eca9d16e2c91f8532aec705df5e8.slice - libcontainer container kubepods-burstable-pode596eca9d16e2c91f8532aec705df5e8.slice. May 14 23:57:43.675989 kubelet[2389]: I0514 23:57:43.675917 2389 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:43.676497 kubelet[2389]: E0514 23:57:43.676429 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.8.230:6443/api/v1/nodes\": dial tcp 91.99.8.230:6443: connect: connection refused" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:43.704684 kubelet[2389]: I0514 23:57:43.704022 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e596eca9d16e2c91f8532aec705df5e8-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-df83517ae5\" (UID: \"e596eca9d16e2c91f8532aec705df5e8\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.704684 kubelet[2389]: I0514 23:57:43.704087 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.704684 kubelet[2389]: I0514 23:57:43.704178 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.704684 kubelet[2389]: I0514 23:57:43.704220 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.704684 kubelet[2389]: I0514 23:57:43.704260 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.705026 kubelet[2389]: I0514 23:57:43.704292 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.705026 kubelet[2389]: I0514 23:57:43.704328 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " 
pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.705026 kubelet[2389]: I0514 23:57:43.704363 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.705026 kubelet[2389]: I0514 23:57:43.704394 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:43.705026 kubelet[2389]: E0514 23:57:43.704393 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.8.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-df83517ae5?timeout=10s\": dial tcp 91.99.8.230:6443: connect: connection refused" interval="400ms" May 14 23:57:43.880144 kubelet[2389]: I0514 23:57:43.880073 2389 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:43.880639 kubelet[2389]: E0514 23:57:43.880438 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.8.230:6443/api/v1/nodes\": dial tcp 91.99.8.230:6443: connect: connection refused" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:43.955826 containerd[1488]: time="2025-05-14T23:57:43.955653614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-df83517ae5,Uid:354916063c9009b90d3f4fbf6eaec953,Namespace:kube-system,Attempt:0,}" May 14 23:57:43.965680 containerd[1488]: time="2025-05-14T23:57:43.965629853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-df83517ae5,Uid:461c40bb89dddebbdc6a05650afaf8c6,Namespace:kube-system,Attempt:0,}" May 14 23:57:43.969599 containerd[1488]: time="2025-05-14T23:57:43.969562181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-df83517ae5,Uid:e596eca9d16e2c91f8532aec705df5e8,Namespace:kube-system,Attempt:0,}" May 14 23:57:44.105523 kubelet[2389]: E0514 23:57:44.105447 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.8.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-df83517ae5?timeout=10s\": dial tcp 91.99.8.230:6443: connect: connection refused" interval="800ms" May 14 23:57:44.283592 kubelet[2389]: I0514 23:57:44.283058 2389 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:44.283592 kubelet[2389]: E0514 23:57:44.283432 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.8.230:6443/api/v1/nodes\": dial tcp 91.99.8.230:6443: connect: connection refused" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:44.508672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389867676.mount: Deactivated successfully. 
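Everything the kubelet attempts against https://91.99.8.230:6443 in this phase fails with "connection refused" (the API server static pod is not running yet), and the node-lease controller backs off: the retry interval grows from 200ms to 400ms and then 800ms in the entries above. A short sketch, assuming one journal entry per line on stdin, that counts the refused connections per endpoint and lists the observed retry intervals:

```python
#!/usr/bin/env python3
"""Tally API-server connection failures and lease retry intervals from journal text (sketch)."""
import re
import sys

refused = re.compile(r"dial tcp ([\d.]+:\d+): connect: connection refused")
interval = re.compile(r'"Failed to ensure lease exists, will retry".*interval="([^"]+)"')

refused_counts = {}
intervals = []
for line in sys.stdin:
    m = refused.search(line)
    if m:
        refused_counts[m.group(1)] = refused_counts.get(m.group(1), 0) + 1
    m = interval.search(line)
    if m:
        intervals.append(m.group(1))

for endpoint, n in sorted(refused_counts.items()):
    print(f"{endpoint}: {n} refused connections")
print("lease retry intervals:", " -> ".join(intervals))
```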
May 14 23:57:44.515606 containerd[1488]: time="2025-05-14T23:57:44.515374844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:57:44.517477 containerd[1488]: time="2025-05-14T23:57:44.517425989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 14 23:57:44.520479 containerd[1488]: time="2025-05-14T23:57:44.520345088Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:57:44.521456 containerd[1488]: time="2025-05-14T23:57:44.521287802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:57:44.521815 containerd[1488]: time="2025-05-14T23:57:44.521778878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:57:44.524904 containerd[1488]: time="2025-05-14T23:57:44.524837217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:57:44.525434 containerd[1488]: time="2025-05-14T23:57:44.525275173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:57:44.526809 containerd[1488]: time="2025-05-14T23:57:44.526705923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:57:44.530002 containerd[1488]: time="2025-05-14T23:57:44.529330185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.551972ms" May 14 23:57:44.531452 containerd[1488]: time="2025-05-14T23:57:44.531169012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.240881ms" May 14 23:57:44.535279 containerd[1488]: time="2025-05-14T23:57:44.535072264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.434523ms" May 14 23:57:44.600749 kubelet[2389]: W0514 23:57:44.599693 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.8.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:44.600749 
kubelet[2389]: E0514 23:57:44.599780 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.8.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:44.647438 containerd[1488]: time="2025-05-14T23:57:44.647280989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:44.647438 containerd[1488]: time="2025-05-14T23:57:44.647432948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:44.647882 containerd[1488]: time="2025-05-14T23:57:44.647493267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.648325 containerd[1488]: time="2025-05-14T23:57:44.647619026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.648585 containerd[1488]: time="2025-05-14T23:57:44.648491980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:44.648731 containerd[1488]: time="2025-05-14T23:57:44.648685979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.649524893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.651394400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.652030915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.652491072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.652560351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.653132 containerd[1488]: time="2025-05-14T23:57:44.652984988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:44.675754 systemd[1]: Started cri-containerd-09a2126837836804c29890fbdfd88f7c25fd96d1bb36ba7deff0ae1549f921ab.scope - libcontainer container 09a2126837836804c29890fbdfd88f7c25fd96d1bb36ba7deff0ae1549f921ab. May 14 23:57:44.680430 systemd[1]: Started cri-containerd-4c40d38c643b1e0dd45a2ac0c829eb1bd28113e3de5a1a4d27a2b98718fb1e42.scope - libcontainer container 4c40d38c643b1e0dd45a2ac0c829eb1bd28113e3de5a1a4d27a2b98718fb1e42. May 14 23:57:44.686583 systemd[1]: Started cri-containerd-7c85a6a2a40312d260e967225f3a06004214f61dd657ead0428c89f7e5c4f9ec.scope - libcontainer container 7c85a6a2a40312d260e967225f3a06004214f61dd657ead0428c89f7e5c4f9ec. 
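The pod sandboxes being set up here belong to the control-plane static pods; earlier the kubelet logged "Adding static pod path" with path=/etc/kubernetes/manifests, which is where their manifests live. A minimal sketch, assuming it runs on the node itself and that each manifest is plain YAML whose first "name:" line is the pod name (no YAML library is used, just a naive scan):

```python
#!/usr/bin/env python3
"""List the static pod manifests the kubelet watches (illustrative sketch, run on the node)."""
import glob
import os

MANIFEST_DIR = "/etc/kubernetes/manifests"  # path taken from the 'Adding static pod path' log entry

for path in sorted(glob.glob(os.path.join(MANIFEST_DIR, "*.yaml"))):
    name = None
    with open(path) as fh:
        for line in fh:
            stripped = line.strip()
            if stripped.startswith("name:"):
                name = stripped.split(":", 1)[1].strip()
                break
    print(f"{os.path.basename(path)}: pod name {name or 'not found'}")
```

On a kubeadm-style node this would typically list kube-apiserver, kube-controller-manager, kube-scheduler and etcd manifests, matching the RunPodSandbox entries above and below.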
May 14 23:57:44.736268 containerd[1488]: time="2025-05-14T23:57:44.736218559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-df83517ae5,Uid:354916063c9009b90d3f4fbf6eaec953,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c85a6a2a40312d260e967225f3a06004214f61dd657ead0428c89f7e5c4f9ec\"" May 14 23:57:44.741345 containerd[1488]: time="2025-05-14T23:57:44.741309482Z" level=info msg="CreateContainer within sandbox \"7c85a6a2a40312d260e967225f3a06004214f61dd657ead0428c89f7e5c4f9ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:57:44.746524 containerd[1488]: time="2025-05-14T23:57:44.746479246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-df83517ae5,Uid:461c40bb89dddebbdc6a05650afaf8c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c40d38c643b1e0dd45a2ac0c829eb1bd28113e3de5a1a4d27a2b98718fb1e42\"" May 14 23:57:44.747458 containerd[1488]: time="2025-05-14T23:57:44.747394279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-df83517ae5,Uid:e596eca9d16e2c91f8532aec705df5e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a2126837836804c29890fbdfd88f7c25fd96d1bb36ba7deff0ae1549f921ab\"" May 14 23:57:44.751944 containerd[1488]: time="2025-05-14T23:57:44.751915887Z" level=info msg="CreateContainer within sandbox \"4c40d38c643b1e0dd45a2ac0c829eb1bd28113e3de5a1a4d27a2b98718fb1e42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:57:44.755090 containerd[1488]: time="2025-05-14T23:57:44.755057905Z" level=info msg="CreateContainer within sandbox \"09a2126837836804c29890fbdfd88f7c25fd96d1bb36ba7deff0ae1549f921ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:57:44.765239 containerd[1488]: time="2025-05-14T23:57:44.765185513Z" level=info msg="CreateContainer within sandbox \"7c85a6a2a40312d260e967225f3a06004214f61dd657ead0428c89f7e5c4f9ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a107fb86b5088c26b9966ae0da10322feed13a5710ff3efd6fef95d21753f687\"" May 14 23:57:44.766612 containerd[1488]: time="2025-05-14T23:57:44.766585303Z" level=info msg="StartContainer for \"a107fb86b5088c26b9966ae0da10322feed13a5710ff3efd6fef95d21753f687\"" May 14 23:57:44.775379 containerd[1488]: time="2025-05-14T23:57:44.775329281Z" level=info msg="CreateContainer within sandbox \"4c40d38c643b1e0dd45a2ac0c829eb1bd28113e3de5a1a4d27a2b98718fb1e42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3886ec6402d249f92ea1b84a4a7d863d3f35ed85ae6379068edc9ffd12302c0e\"" May 14 23:57:44.776954 containerd[1488]: time="2025-05-14T23:57:44.775856958Z" level=info msg="StartContainer for \"3886ec6402d249f92ea1b84a4a7d863d3f35ed85ae6379068edc9ffd12302c0e\"" May 14 23:57:44.779168 containerd[1488]: time="2025-05-14T23:57:44.779086095Z" level=info msg="CreateContainer within sandbox \"09a2126837836804c29890fbdfd88f7c25fd96d1bb36ba7deff0ae1549f921ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a50147abbbb2e29fd502f7da87d9732fead2febac9aad79143d03ee84546b5c8\"" May 14 23:57:44.779776 containerd[1488]: time="2025-05-14T23:57:44.779746370Z" level=info msg="StartContainer for \"a50147abbbb2e29fd502f7da87d9732fead2febac9aad79143d03ee84546b5c8\"" May 14 23:57:44.804576 systemd[1]: Started cri-containerd-a107fb86b5088c26b9966ae0da10322feed13a5710ff3efd6fef95d21753f687.scope - libcontainer container 
a107fb86b5088c26b9966ae0da10322feed13a5710ff3efd6fef95d21753f687. May 14 23:57:44.813834 systemd[1]: Started cri-containerd-3886ec6402d249f92ea1b84a4a7d863d3f35ed85ae6379068edc9ffd12302c0e.scope - libcontainer container 3886ec6402d249f92ea1b84a4a7d863d3f35ed85ae6379068edc9ffd12302c0e. May 14 23:57:44.840569 systemd[1]: Started cri-containerd-a50147abbbb2e29fd502f7da87d9732fead2febac9aad79143d03ee84546b5c8.scope - libcontainer container a50147abbbb2e29fd502f7da87d9732fead2febac9aad79143d03ee84546b5c8. May 14 23:57:44.870640 containerd[1488]: time="2025-05-14T23:57:44.869393495Z" level=info msg="StartContainer for \"a107fb86b5088c26b9966ae0da10322feed13a5710ff3efd6fef95d21753f687\" returns successfully" May 14 23:57:44.899294 containerd[1488]: time="2025-05-14T23:57:44.899246803Z" level=info msg="StartContainer for \"3886ec6402d249f92ea1b84a4a7d863d3f35ed85ae6379068edc9ffd12302c0e\" returns successfully" May 14 23:57:44.906181 kubelet[2389]: E0514 23:57:44.906135 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.8.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-df83517ae5?timeout=10s\": dial tcp 91.99.8.230:6443: connect: connection refused" interval="1.6s" May 14 23:57:44.911293 kubelet[2389]: W0514 23:57:44.911081 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.8.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-df83517ae5&limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:44.911293 kubelet[2389]: E0514 23:57:44.911233 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.8.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-df83517ae5&limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:44.913761 containerd[1488]: time="2025-05-14T23:57:44.913386783Z" level=info msg="StartContainer for \"a50147abbbb2e29fd502f7da87d9732fead2febac9aad79143d03ee84546b5c8\" returns successfully" May 14 23:57:44.983739 kubelet[2389]: W0514 23:57:44.983701 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.8.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.8.230:6443: connect: connection refused May 14 23:57:44.984169 kubelet[2389]: E0514 23:57:44.984129 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.8.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.8.230:6443: connect: connection refused" logger="UnhandledError" May 14 23:57:45.086315 kubelet[2389]: I0514 23:57:45.085903 2389 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:46.991763 kubelet[2389]: E0514 23:57:46.991699 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-n-df83517ae5\" not found" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:47.093193 kubelet[2389]: I0514 23:57:47.092867 2389 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:47.496285 kubelet[2389]: I0514 23:57:47.496171 2389 apiserver.go:52] "Watching apiserver" May 
14 23:57:47.503128 kubelet[2389]: I0514 23:57:47.502944 2389 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:57:49.238547 systemd[1]: Reload requested from client PID 2665 ('systemctl') (unit session-7.scope)... May 14 23:57:49.238578 systemd[1]: Reloading... May 14 23:57:49.345494 zram_generator::config[2708]: No configuration found. May 14 23:57:49.439662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:57:49.545779 systemd[1]: Reloading finished in 306 ms. May 14 23:57:49.566960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:49.578285 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:57:49.578692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:49.578775 systemd[1]: kubelet.service: Consumed 1.098s CPU time, 116.9M memory peak. May 14 23:57:49.586234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:57:49.699676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:57:49.702634 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:57:49.754924 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:57:49.754924 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:57:49.754924 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:57:49.755314 kubelet[2755]: I0514 23:57:49.754978 2755 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:57:49.763775 kubelet[2755]: I0514 23:57:49.763448 2755 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:57:49.763775 kubelet[2755]: I0514 23:57:49.763480 2755 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:57:49.764894 kubelet[2755]: I0514 23:57:49.763881 2755 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:57:49.768074 kubelet[2755]: I0514 23:57:49.767536 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:57:49.776286 kubelet[2755]: I0514 23:57:49.775976 2755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:57:49.780428 kubelet[2755]: E0514 23:57:49.780035 2755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:57:49.780428 kubelet[2755]: I0514 23:57:49.780107 2755 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." May 14 23:57:49.784198 kubelet[2755]: I0514 23:57:49.784162 2755 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:57:49.784315 kubelet[2755]: I0514 23:57:49.784286 2755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:57:49.784423 kubelet[2755]: I0514 23:57:49.784373 2755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:57:49.784620 kubelet[2755]: I0514 23:57:49.784411 2755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-df83517ae5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:57:49.784620 kubelet[2755]: I0514 23:57:49.784613 2755 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:57:49.784781 kubelet[2755]: I0514 23:57:49.784629 2755 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:57:49.784781 kubelet[2755]: I0514 23:57:49.784662 2755 state_mem.go:36] "Initialized new in-memory state store" May 14 23:57:49.784781 kubelet[2755]: I0514 23:57:49.784769 2755 kubelet.go:408] "Attempting to sync node with API server" May 14 23:57:49.784887 kubelet[2755]: I0514 23:57:49.784785 2755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:57:49.786568 kubelet[2755]: I0514 23:57:49.784807 2755 kubelet.go:314] "Adding apiserver pod source" May 14 23:57:49.786568 kubelet[2755]: I0514 23:57:49.786448 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:57:49.790556 kubelet[2755]: I0514 23:57:49.790523 2755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:57:49.790999 kubelet[2755]: I0514 23:57:49.790974 2755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:57:49.791380 kubelet[2755]: I0514 
23:57:49.791361 2755 server.go:1269] "Started kubelet" May 14 23:57:49.794158 kubelet[2755]: I0514 23:57:49.793731 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:57:49.794158 kubelet[2755]: I0514 23:57:49.793976 2755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:57:49.794158 kubelet[2755]: I0514 23:57:49.794026 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:57:49.795507 kubelet[2755]: I0514 23:57:49.795485 2755 server.go:460] "Adding debug handlers to kubelet server" May 14 23:57:49.799892 kubelet[2755]: I0514 23:57:49.797188 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:57:49.801424 kubelet[2755]: I0514 23:57:49.800965 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:57:49.807413 kubelet[2755]: I0514 23:57:49.805419 2755 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:57:49.809632 kubelet[2755]: I0514 23:57:49.809572 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:57:49.809847 kubelet[2755]: I0514 23:57:49.809831 2755 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:57:49.809907 kubelet[2755]: E0514 23:57:49.805483 2755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-df83517ae5\" not found" May 14 23:57:49.810132 kubelet[2755]: I0514 23:57:49.810118 2755 reconciler.go:26] "Reconciler: start to sync state" May 14 23:57:49.816429 kubelet[2755]: I0514 23:57:49.814457 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:57:49.816565 kubelet[2755]: I0514 23:57:49.816552 2755 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:57:49.816646 kubelet[2755]: I0514 23:57:49.816636 2755 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:57:49.816752 kubelet[2755]: E0514 23:57:49.816726 2755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:57:49.821987 kubelet[2755]: I0514 23:57:49.819833 2755 factory.go:221] Registration of the systemd container factory successfully May 14 23:57:49.821987 kubelet[2755]: I0514 23:57:49.819954 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:57:49.829822 kubelet[2755]: E0514 23:57:49.829646 2755 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:57:49.847632 kubelet[2755]: I0514 23:57:49.847598 2755 factory.go:221] Registration of the containerd container factory successfully May 14 23:57:49.900195 kubelet[2755]: I0514 23:57:49.900136 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:57:49.900195 kubelet[2755]: I0514 23:57:49.900166 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:57:49.900195 kubelet[2755]: I0514 23:57:49.900190 2755 state_mem.go:36] "Initialized new in-memory state store" May 14 23:57:49.900366 kubelet[2755]: I0514 23:57:49.900343 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:57:49.900400 kubelet[2755]: I0514 23:57:49.900361 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:57:49.900400 kubelet[2755]: I0514 23:57:49.900378 2755 policy_none.go:49] "None policy: Start" May 14 23:57:49.900982 kubelet[2755]: I0514 23:57:49.900965 2755 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:57:49.901043 kubelet[2755]: I0514 23:57:49.900989 2755 state_mem.go:35] "Initializing new in-memory state store" May 14 23:57:49.901188 kubelet[2755]: I0514 23:57:49.901172 2755 state_mem.go:75] "Updated machine memory state" May 14 23:57:49.905529 kubelet[2755]: I0514 23:57:49.905502 2755 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:57:49.905698 kubelet[2755]: I0514 23:57:49.905661 2755 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:57:49.905698 kubelet[2755]: I0514 23:57:49.905672 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:57:49.906734 kubelet[2755]: I0514 23:57:49.906656 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:57:50.009711 kubelet[2755]: I0514 23:57:50.009661 2755 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013313 kubelet[2755]: I0514 23:57:50.013282 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013743 kubelet[2755]: I0514 23:57:50.013496 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013743 kubelet[2755]: I0514 23:57:50.013532 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e596eca9d16e2c91f8532aec705df5e8-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-df83517ae5\" (UID: \"e596eca9d16e2c91f8532aec705df5e8\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013743 kubelet[2755]: I0514 23:57:50.013548 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013743 kubelet[2755]: I0514 23:57:50.013562 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013743 kubelet[2755]: I0514 23:57:50.013578 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/354916063c9009b90d3f4fbf6eaec953-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-df83517ae5\" (UID: \"354916063c9009b90d3f4fbf6eaec953\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013940 kubelet[2755]: I0514 23:57:50.013616 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013940 kubelet[2755]: I0514 23:57:50.013671 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.013940 kubelet[2755]: I0514 23:57:50.013691 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/461c40bb89dddebbdc6a05650afaf8c6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-df83517ae5\" (UID: \"461c40bb89dddebbdc6a05650afaf8c6\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" May 14 23:57:50.022824 kubelet[2755]: I0514 23:57:50.022786 2755 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:50.022986 kubelet[2755]: I0514 23:57:50.022895 2755 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-1-n-df83517ae5" May 14 23:57:50.231966 sudo[2786]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:57:50.232788 sudo[2786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:57:50.728135 sudo[2786]: pam_unix(sudo:session): session closed for user root May 14 23:57:50.789033 kubelet[2755]: I0514 23:57:50.787906 2755 apiserver.go:52] "Watching apiserver" May 14 23:57:50.810638 kubelet[2755]: I0514 23:57:50.810521 2755 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:57:50.922292 kubelet[2755]: I0514 23:57:50.920770 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-n-df83517ae5" podStartSLOduration=1.920651698 
podStartE2EDuration="1.920651698s" podCreationTimestamp="2025-05-14 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:57:50.920327138 +0000 UTC m=+1.211347848" watchObservedRunningTime="2025-05-14 23:57:50.920651698 +0000 UTC m=+1.211672448" May 14 23:57:50.922292 kubelet[2755]: I0514 23:57:50.921424 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-df83517ae5" podStartSLOduration=1.921332176 podStartE2EDuration="1.921332176s" podCreationTimestamp="2025-05-14 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:57:50.907065963 +0000 UTC m=+1.198086673" watchObservedRunningTime="2025-05-14 23:57:50.921332176 +0000 UTC m=+1.212352966" May 14 23:57:50.949791 kubelet[2755]: I0514 23:57:50.949604 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-n-df83517ae5" podStartSLOduration=1.949574684 podStartE2EDuration="1.949574684s" podCreationTimestamp="2025-05-14 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:57:50.93551707 +0000 UTC m=+1.226537820" watchObservedRunningTime="2025-05-14 23:57:50.949574684 +0000 UTC m=+1.240595474" May 14 23:57:52.849764 sudo[1857]: pam_unix(sudo:session): session closed for user root May 14 23:57:53.009134 sshd[1856]: Connection closed by 147.75.109.163 port 44290 May 14 23:57:53.010143 sshd-session[1854]: pam_unix(sshd:session): session closed for user core May 14 23:57:53.016890 systemd[1]: sshd@6-91.99.8.230:22-147.75.109.163:44290.service: Deactivated successfully. May 14 23:57:53.019672 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:57:53.020023 systemd[1]: session-7.scope: Consumed 7.020s CPU time, 260.4M memory peak. May 14 23:57:53.021186 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. May 14 23:57:53.024018 systemd-logind[1468]: Removed session 7. May 14 23:57:55.033668 kubelet[2755]: I0514 23:57:55.033621 2755 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:57:55.035503 containerd[1488]: time="2025-05-14T23:57:55.034268546Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:57:55.035821 kubelet[2755]: I0514 23:57:55.034526 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:57:55.719891 systemd[1]: Created slice kubepods-besteffort-pod9d5a6bab_06a6_47b7_891f_7f995ed26f36.slice - libcontainer container kubepods-besteffort-pod9d5a6bab_06a6_47b7_891f_7f995ed26f36.slice. May 14 23:57:55.734822 systemd[1]: Created slice kubepods-burstable-pod3d5e917e_4836_47e0_9b1f_de5afb939f13.slice - libcontainer container kubepods-burstable-pod3d5e917e_4836_47e0_9b1f_de5afb939f13.slice. 
May 14 23:57:55.747689 kubelet[2755]: I0514 23:57:55.747654 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-bpf-maps\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748434 kubelet[2755]: I0514 23:57:55.747900 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-cgroup\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748434 kubelet[2755]: I0514 23:57:55.747930 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d5a6bab-06a6-47b7-891f-7f995ed26f36-xtables-lock\") pod \"kube-proxy-gsntt\" (UID: \"9d5a6bab-06a6-47b7-891f-7f995ed26f36\") " pod="kube-system/kube-proxy-gsntt" May 14 23:57:55.748434 kubelet[2755]: I0514 23:57:55.747947 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d5a6bab-06a6-47b7-891f-7f995ed26f36-lib-modules\") pod \"kube-proxy-gsntt\" (UID: \"9d5a6bab-06a6-47b7-891f-7f995ed26f36\") " pod="kube-system/kube-proxy-gsntt" May 14 23:57:55.748434 kubelet[2755]: I0514 23:57:55.747962 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-config-path\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748434 kubelet[2755]: I0514 23:57:55.747999 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-hubble-tls\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748602 kubelet[2755]: I0514 23:57:55.748119 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmf4w\" (UniqueName: \"kubernetes.io/projected/9d5a6bab-06a6-47b7-891f-7f995ed26f36-kube-api-access-qmf4w\") pod \"kube-proxy-gsntt\" (UID: \"9d5a6bab-06a6-47b7-891f-7f995ed26f36\") " pod="kube-system/kube-proxy-gsntt" May 14 23:57:55.748602 kubelet[2755]: I0514 23:57:55.748156 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-run\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748602 kubelet[2755]: I0514 23:57:55.748174 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-etc-cni-netd\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748602 kubelet[2755]: I0514 23:57:55.748195 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-kernel\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748602 kubelet[2755]: I0514 23:57:55.748224 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xv7r\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-kube-api-access-6xv7r\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748239 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d5a6bab-06a6-47b7-891f-7f995ed26f36-kube-proxy\") pod \"kube-proxy-gsntt\" (UID: \"9d5a6bab-06a6-47b7-891f-7f995ed26f36\") " pod="kube-system/kube-proxy-gsntt" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748254 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-hostproc\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748270 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cni-path\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748284 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-lib-modules\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748298 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5e917e-4836-47e0-9b1f-de5afb939f13-clustermesh-secrets\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748710 kubelet[2755]: I0514 23:57:55.748312 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-net\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:55.748909 kubelet[2755]: I0514 23:57:55.748331 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-xtables-lock\") pod \"cilium-zkgh7\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " pod="kube-system/cilium-zkgh7" May 14 23:57:56.006286 systemd[1]: Created slice kubepods-besteffort-podc6a9906e_b329_43ff_8780_91ead501c379.slice - libcontainer container kubepods-besteffort-podc6a9906e_b329_43ff_8780_91ead501c379.slice. 
May 14 23:57:56.033428 containerd[1488]: time="2025-05-14T23:57:56.033001063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsntt,Uid:9d5a6bab-06a6-47b7-891f-7f995ed26f36,Namespace:kube-system,Attempt:0,}" May 14 23:57:56.041912 containerd[1488]: time="2025-05-14T23:57:56.041764525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkgh7,Uid:3d5e917e-4836-47e0-9b1f-de5afb939f13,Namespace:kube-system,Attempt:0,}" May 14 23:57:56.053325 kubelet[2755]: I0514 23:57:56.053213 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6a9906e-b329-43ff-8780-91ead501c379-cilium-config-path\") pod \"cilium-operator-5d85765b45-m8rlh\" (UID: \"c6a9906e-b329-43ff-8780-91ead501c379\") " pod="kube-system/cilium-operator-5d85765b45-m8rlh" May 14 23:57:56.054255 kubelet[2755]: I0514 23:57:56.053536 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtjl\" (UniqueName: \"kubernetes.io/projected/c6a9906e-b329-43ff-8780-91ead501c379-kube-api-access-dxtjl\") pod \"cilium-operator-5d85765b45-m8rlh\" (UID: \"c6a9906e-b329-43ff-8780-91ead501c379\") " pod="kube-system/cilium-operator-5d85765b45-m8rlh" May 14 23:57:56.069168 containerd[1488]: time="2025-05-14T23:57:56.068575871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:56.069168 containerd[1488]: time="2025-05-14T23:57:56.068644551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:56.069579 containerd[1488]: time="2025-05-14T23:57:56.069283393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.069579 containerd[1488]: time="2025-05-14T23:57:56.069426393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.076199 containerd[1488]: time="2025-05-14T23:57:56.076099290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:56.076340 containerd[1488]: time="2025-05-14T23:57:56.076224410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:56.076651 containerd[1488]: time="2025-05-14T23:57:56.076608571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.077659 containerd[1488]: time="2025-05-14T23:57:56.077552534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.091618 systemd[1]: Started cri-containerd-877391b521470da02ddc2e7b54417b87329f9e2112c26c4b3f8d62ca1784a3b1.scope - libcontainer container 877391b521470da02ddc2e7b54417b87329f9e2112c26c4b3f8d62ca1784a3b1. May 14 23:57:56.100081 systemd[1]: Started cri-containerd-7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8.scope - libcontainer container 7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8. 
May 14 23:57:56.123284 containerd[1488]: time="2025-05-14T23:57:56.123236447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gsntt,Uid:9d5a6bab-06a6-47b7-891f-7f995ed26f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"877391b521470da02ddc2e7b54417b87329f9e2112c26c4b3f8d62ca1784a3b1\"" May 14 23:57:56.127284 containerd[1488]: time="2025-05-14T23:57:56.127241457Z" level=info msg="CreateContainer within sandbox \"877391b521470da02ddc2e7b54417b87329f9e2112c26c4b3f8d62ca1784a3b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:57:56.143548 containerd[1488]: time="2025-05-14T23:57:56.143378937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkgh7,Uid:3d5e917e-4836-47e0-9b1f-de5afb939f13,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\"" May 14 23:57:56.147238 containerd[1488]: time="2025-05-14T23:57:56.147198227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 23:57:56.158798 containerd[1488]: time="2025-05-14T23:57:56.158752175Z" level=info msg="CreateContainer within sandbox \"877391b521470da02ddc2e7b54417b87329f9e2112c26c4b3f8d62ca1784a3b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"957df5cbadd37f91d8d5e837baeba353a75622bad12e10067d764254d0561306\"" May 14 23:57:56.162348 containerd[1488]: time="2025-05-14T23:57:56.162281904Z" level=info msg="StartContainer for \"957df5cbadd37f91d8d5e837baeba353a75622bad12e10067d764254d0561306\"" May 14 23:57:56.197662 systemd[1]: Started cri-containerd-957df5cbadd37f91d8d5e837baeba353a75622bad12e10067d764254d0561306.scope - libcontainer container 957df5cbadd37f91d8d5e837baeba353a75622bad12e10067d764254d0561306. May 14 23:57:56.233381 containerd[1488]: time="2025-05-14T23:57:56.233309760Z" level=info msg="StartContainer for \"957df5cbadd37f91d8d5e837baeba353a75622bad12e10067d764254d0561306\" returns successfully" May 14 23:57:56.311088 containerd[1488]: time="2025-05-14T23:57:56.310882113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m8rlh,Uid:c6a9906e-b329-43ff-8780-91ead501c379,Namespace:kube-system,Attempt:0,}" May 14 23:57:56.341032 containerd[1488]: time="2025-05-14T23:57:56.340458546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:57:56.341032 containerd[1488]: time="2025-05-14T23:57:56.340522987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:57:56.341032 containerd[1488]: time="2025-05-14T23:57:56.340534427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.341032 containerd[1488]: time="2025-05-14T23:57:56.340620627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:57:56.364822 systemd[1]: Started cri-containerd-9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12.scope - libcontainer container 9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12. 
May 14 23:57:56.414910 containerd[1488]: time="2025-05-14T23:57:56.414551290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m8rlh,Uid:c6a9906e-b329-43ff-8780-91ead501c379,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\"" May 14 23:57:58.010663 kubelet[2755]: I0514 23:57:58.010587 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gsntt" podStartSLOduration=3.009959589 podStartE2EDuration="3.009959589s" podCreationTimestamp="2025-05-14 23:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:57:56.90973592 +0000 UTC m=+7.200756670" watchObservedRunningTime="2025-05-14 23:57:58.009959589 +0000 UTC m=+8.300980299" May 14 23:57:59.733867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510759121.mount: Deactivated successfully. May 14 23:58:01.093579 containerd[1488]: time="2025-05-14T23:58:01.092347236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:58:01.093579 containerd[1488]: time="2025-05-14T23:58:01.093513522Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 23:58:01.094282 containerd[1488]: time="2025-05-14T23:58:01.094255726Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:58:01.096616 containerd[1488]: time="2025-05-14T23:58:01.096577579Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.949333512s" May 14 23:58:01.096752 containerd[1488]: time="2025-05-14T23:58:01.096736540Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 23:58:01.098810 containerd[1488]: time="2025-05-14T23:58:01.098580430Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 23:58:01.100738 containerd[1488]: time="2025-05-14T23:58:01.100597041Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:58:01.117818 containerd[1488]: time="2025-05-14T23:58:01.117775696Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\"" May 14 23:58:01.118509 containerd[1488]: time="2025-05-14T23:58:01.118486380Z" level=info msg="StartContainer for 
\"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\"" May 14 23:58:01.152858 systemd[1]: Started cri-containerd-08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1.scope - libcontainer container 08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1. May 14 23:58:01.195748 containerd[1488]: time="2025-05-14T23:58:01.195652645Z" level=info msg="StartContainer for \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\" returns successfully" May 14 23:58:01.212755 systemd[1]: cri-containerd-08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1.scope: Deactivated successfully. May 14 23:58:01.342935 containerd[1488]: time="2025-05-14T23:58:01.342862817Z" level=info msg="shim disconnected" id=08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1 namespace=k8s.io May 14 23:58:01.343556 containerd[1488]: time="2025-05-14T23:58:01.343208619Z" level=warning msg="cleaning up after shim disconnected" id=08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1 namespace=k8s.io May 14 23:58:01.343556 containerd[1488]: time="2025-05-14T23:58:01.343231899Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:58:01.917055 containerd[1488]: time="2025-05-14T23:58:01.916996622Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:58:01.940709 containerd[1488]: time="2025-05-14T23:58:01.940662552Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\"" May 14 23:58:01.942384 containerd[1488]: time="2025-05-14T23:58:01.941441636Z" level=info msg="StartContainer for \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\"" May 14 23:58:01.968574 systemd[1]: Started cri-containerd-b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c.scope - libcontainer container b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c. May 14 23:58:01.997231 containerd[1488]: time="2025-05-14T23:58:01.996524660Z" level=info msg="StartContainer for \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\" returns successfully" May 14 23:58:02.009587 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:58:02.010132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:58:02.010510 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 23:58:02.017727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:58:02.017924 systemd[1]: cri-containerd-b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c.scope: Deactivated successfully. May 14 23:58:02.044503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 23:58:02.056388 containerd[1488]: time="2025-05-14T23:58:02.056309420Z" level=info msg="shim disconnected" id=b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c namespace=k8s.io May 14 23:58:02.056388 containerd[1488]: time="2025-05-14T23:58:02.056372301Z" level=warning msg="cleaning up after shim disconnected" id=b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c namespace=k8s.io May 14 23:58:02.056388 containerd[1488]: time="2025-05-14T23:58:02.056384261Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:58:02.070075 containerd[1488]: time="2025-05-14T23:58:02.070021984Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:58:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:58:02.114567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1-rootfs.mount: Deactivated successfully. May 14 23:58:02.923150 containerd[1488]: time="2025-05-14T23:58:02.923101556Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:58:02.944395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574336401.mount: Deactivated successfully. May 14 23:58:02.951928 containerd[1488]: time="2025-05-14T23:58:02.951886250Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\"" May 14 23:58:02.952829 containerd[1488]: time="2025-05-14T23:58:02.952798416Z" level=info msg="StartContainer for \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\"" May 14 23:58:02.994667 systemd[1]: Started cri-containerd-81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6.scope - libcontainer container 81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6. May 14 23:58:03.029102 containerd[1488]: time="2025-05-14T23:58:03.029049053Z" level=info msg="StartContainer for \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\" returns successfully" May 14 23:58:03.037195 systemd[1]: cri-containerd-81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6.scope: Deactivated successfully. May 14 23:58:03.072464 containerd[1488]: time="2025-05-14T23:58:03.072394499Z" level=info msg="shim disconnected" id=81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6 namespace=k8s.io May 14 23:58:03.072858 containerd[1488]: time="2025-05-14T23:58:03.072687581Z" level=warning msg="cleaning up after shim disconnected" id=81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6 namespace=k8s.io May 14 23:58:03.072858 containerd[1488]: time="2025-05-14T23:58:03.072704181Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:58:03.112089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6-rootfs.mount: Deactivated successfully. 
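
Most containerd lines in this log share the same logfmt-style layout of time=, level=, msg= plus extra key=value fields. A throwaway parser for that layout, shown only as an illustration; it assumes double-quoted values contain no escaped quotes, which the "shim disconnected" line above satisfies:

import re

FIELD = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def parse_fields(line: str) -> dict:
    """Split a containerd log line into its key=value fields."""
    return {
        m.group(1): m.group(2) if m.group(2) is not None else m.group(3)
        for m in FIELD.finditer(line)
    }

line = ('time="2025-05-14T23:58:02.056309420Z" level=info msg="shim disconnected" '
        'id=b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c namespace=k8s.io')
print(parse_fields(line)["level"])  # info
print(parse_fields(line)["msg"])    # shim disconnected
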
May 14 23:58:03.402941 containerd[1488]: time="2025-05-14T23:58:03.402864239Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:58:03.405367 containerd[1488]: time="2025-05-14T23:58:03.405182334Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 23:58:03.406333 containerd[1488]: time="2025-05-14T23:58:03.406256701Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:58:03.408751 containerd[1488]: time="2025-05-14T23:58:03.407513749Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.308891639s" May 14 23:58:03.408751 containerd[1488]: time="2025-05-14T23:58:03.407553190Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 23:58:03.415220 containerd[1488]: time="2025-05-14T23:58:03.414929798Z" level=info msg="CreateContainer within sandbox \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 23:58:03.431267 containerd[1488]: time="2025-05-14T23:58:03.431208826Z" level=info msg="CreateContainer within sandbox \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\"" May 14 23:58:03.432604 containerd[1488]: time="2025-05-14T23:58:03.432268313Z" level=info msg="StartContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\"" May 14 23:58:03.465764 systemd[1]: Started cri-containerd-eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82.scope - libcontainer container eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82. 
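
The pull above used a reference that pins both a tag and a digest, and containerd records an empty repo tag alongside the repo digest for the resulting image. Splitting such a reference with plain string handling, purely for illustration (no registry library, and assuming the registry host carries no port, as here):

ref = ("quay.io/cilium/operator-generic:v1.12.5@sha256:"
       "b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
name, _, digest = ref.partition("@")   # split off the digest
repo, _, tag = name.rpartition(":")    # last colon separates the tag
print(repo)    # quay.io/cilium/operator-generic
print(tag)     # v1.12.5
print(digest)  # sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e
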
May 14 23:58:03.496969 containerd[1488]: time="2025-05-14T23:58:03.496918619Z" level=info msg="StartContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" returns successfully" May 14 23:58:03.935522 containerd[1488]: time="2025-05-14T23:58:03.935323031Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 23:58:03.955426 containerd[1488]: time="2025-05-14T23:58:03.955351563Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\"" May 14 23:58:03.958539 containerd[1488]: time="2025-05-14T23:58:03.957697818Z" level=info msg="StartContainer for \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\"" May 14 23:58:03.971874 kubelet[2755]: I0514 23:58:03.971720 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m8rlh" podStartSLOduration=1.979528965 podStartE2EDuration="8.971701351s" podCreationTimestamp="2025-05-14 23:57:55 +0000 UTC" firstStartedPulling="2025-05-14 23:57:56.417528858 +0000 UTC m=+6.708549528" lastFinishedPulling="2025-05-14 23:58:03.409701164 +0000 UTC m=+13.700721914" observedRunningTime="2025-05-14 23:58:03.943764166 +0000 UTC m=+14.234784916" watchObservedRunningTime="2025-05-14 23:58:03.971701351 +0000 UTC m=+14.262722061" May 14 23:58:03.997659 systemd[1]: Started cri-containerd-614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49.scope - libcontainer container 614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49. May 14 23:58:04.064923 containerd[1488]: time="2025-05-14T23:58:04.064875718Z" level=info msg="StartContainer for \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\" returns successfully" May 14 23:58:04.065994 systemd[1]: cri-containerd-614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49.scope: Deactivated successfully. 
May 14 23:58:04.130367 containerd[1488]: time="2025-05-14T23:58:04.130284024Z" level=info msg="shim disconnected" id=614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49 namespace=k8s.io May 14 23:58:04.130367 containerd[1488]: time="2025-05-14T23:58:04.130356624Z" level=warning msg="cleaning up after shim disconnected" id=614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49 namespace=k8s.io May 14 23:58:04.130367 containerd[1488]: time="2025-05-14T23:58:04.130365424Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:58:04.146080 containerd[1488]: time="2025-05-14T23:58:04.146022896Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:58:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:58:04.943515 containerd[1488]: time="2025-05-14T23:58:04.943230085Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 23:58:04.964744 containerd[1488]: time="2025-05-14T23:58:04.962892065Z" level=info msg="CreateContainer within sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\"" May 14 23:58:04.964744 containerd[1488]: time="2025-05-14T23:58:04.963634871Z" level=info msg="StartContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\"" May 14 23:58:05.001730 systemd[1]: Started cri-containerd-2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe.scope - libcontainer container 2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe. May 14 23:58:05.049961 containerd[1488]: time="2025-05-14T23:58:05.049918029Z" level=info msg="StartContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" returns successfully" May 14 23:58:05.149830 kubelet[2755]: I0514 23:58:05.149711 2755 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 23:58:05.191522 systemd[1]: Created slice kubepods-burstable-pod2cbb87aa_e8e0_4349_957f_78fee5d164b6.slice - libcontainer container kubepods-burstable-pod2cbb87aa_e8e0_4349_957f_78fee5d164b6.slice. May 14 23:58:05.198572 systemd[1]: Created slice kubepods-burstable-pod18366fa1_f062_475e_bdfa_396be96f3c3e.slice - libcontainer container kubepods-burstable-pod18366fa1_f062_475e_bdfa_396be96f3c3e.slice. 
May 14 23:58:05.218074 kubelet[2755]: I0514 23:58:05.217941 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhcs\" (UniqueName: \"kubernetes.io/projected/18366fa1-f062-475e-bdfa-396be96f3c3e-kube-api-access-skhcs\") pod \"coredns-6f6b679f8f-j6m2g\" (UID: \"18366fa1-f062-475e-bdfa-396be96f3c3e\") " pod="kube-system/coredns-6f6b679f8f-j6m2g" May 14 23:58:05.218074 kubelet[2755]: I0514 23:58:05.218043 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18366fa1-f062-475e-bdfa-396be96f3c3e-config-volume\") pod \"coredns-6f6b679f8f-j6m2g\" (UID: \"18366fa1-f062-475e-bdfa-396be96f3c3e\") " pod="kube-system/coredns-6f6b679f8f-j6m2g" May 14 23:58:05.218251 kubelet[2755]: I0514 23:58:05.218099 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbb87aa-e8e0-4349-957f-78fee5d164b6-config-volume\") pod \"coredns-6f6b679f8f-fk278\" (UID: \"2cbb87aa-e8e0-4349-957f-78fee5d164b6\") " pod="kube-system/coredns-6f6b679f8f-fk278" May 14 23:58:05.218251 kubelet[2755]: I0514 23:58:05.218119 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk5vq\" (UniqueName: \"kubernetes.io/projected/2cbb87aa-e8e0-4349-957f-78fee5d164b6-kube-api-access-qk5vq\") pod \"coredns-6f6b679f8f-fk278\" (UID: \"2cbb87aa-e8e0-4349-957f-78fee5d164b6\") " pod="kube-system/coredns-6f6b679f8f-fk278" May 14 23:58:05.496098 containerd[1488]: time="2025-05-14T23:58:05.495969105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fk278,Uid:2cbb87aa-e8e0-4349-957f-78fee5d164b6,Namespace:kube-system,Attempt:0,}" May 14 23:58:05.504420 containerd[1488]: time="2025-05-14T23:58:05.504297208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j6m2g,Uid:18366fa1-f062-475e-bdfa-396be96f3c3e,Namespace:kube-system,Attempt:0,}" May 14 23:58:07.301204 systemd-networkd[1393]: cilium_host: Link UP May 14 23:58:07.302160 systemd-networkd[1393]: cilium_net: Link UP May 14 23:58:07.302545 systemd-networkd[1393]: cilium_net: Gained carrier May 14 23:58:07.302690 systemd-networkd[1393]: cilium_host: Gained carrier May 14 23:58:07.410915 systemd-networkd[1393]: cilium_vxlan: Link UP May 14 23:58:07.410926 systemd-networkd[1393]: cilium_vxlan: Gained carrier May 14 23:58:07.641617 systemd-networkd[1393]: cilium_host: Gained IPv6LL May 14 23:58:07.649632 systemd-networkd[1393]: cilium_net: Gained IPv6LL May 14 23:58:07.682771 kernel: NET: Registered PF_ALG protocol family May 14 23:58:08.385161 systemd-networkd[1393]: lxc_health: Link UP May 14 23:58:08.394840 systemd-networkd[1393]: lxc_health: Gained carrier May 14 23:58:08.586447 kernel: eth0: renamed from tmpf2dcf May 14 23:58:08.590242 systemd-networkd[1393]: lxc26ce740b1a7c: Link UP May 14 23:58:08.598570 kernel: eth0: renamed from tmpc51a9 May 14 23:58:08.605738 systemd-networkd[1393]: lxc215bafd08ec6: Link UP May 14 23:58:08.605942 systemd-networkd[1393]: lxc26ce740b1a7c: Gained carrier May 14 23:58:08.609839 systemd-networkd[1393]: lxc215bafd08ec6: Gained carrier May 14 23:58:08.817696 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL May 14 23:58:09.649666 systemd-networkd[1393]: lxc_health: Gained IPv6LL May 14 23:58:09.778115 systemd-networkd[1393]: lxc215bafd08ec6: Gained IPv6LL May 14 23:58:10.034034 
systemd-networkd[1393]: lxc26ce740b1a7c: Gained IPv6LL May 14 23:58:10.084364 kubelet[2755]: I0514 23:58:10.083554 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zkgh7" podStartSLOduration=10.132036111 podStartE2EDuration="15.083537753s" podCreationTimestamp="2025-05-14 23:57:55 +0000 UTC" firstStartedPulling="2025-05-14 23:57:56.146245744 +0000 UTC m=+6.437266454" lastFinishedPulling="2025-05-14 23:58:01.097747386 +0000 UTC m=+11.388768096" observedRunningTime="2025-05-14 23:58:05.965517359 +0000 UTC m=+16.256538069" watchObservedRunningTime="2025-05-14 23:58:10.083537753 +0000 UTC m=+20.374558423" May 14 23:58:12.461524 containerd[1488]: time="2025-05-14T23:58:12.461293015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:58:12.462315 containerd[1488]: time="2025-05-14T23:58:12.461472936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:58:12.462315 containerd[1488]: time="2025-05-14T23:58:12.461489017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:58:12.462974 containerd[1488]: time="2025-05-14T23:58:12.462745910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:58:12.480066 containerd[1488]: time="2025-05-14T23:58:12.479763052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:58:12.480066 containerd[1488]: time="2025-05-14T23:58:12.479819693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:58:12.480066 containerd[1488]: time="2025-05-14T23:58:12.479836173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:58:12.480066 containerd[1488]: time="2025-05-14T23:58:12.479915614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:58:12.501658 systemd[1]: Started cri-containerd-f2dcf336e5418129b2d009de1c9998ba08849033ceb0c43fe9c16061c50a9530.scope - libcontainer container f2dcf336e5418129b2d009de1c9998ba08849033ceb0c43fe9c16061c50a9530. May 14 23:58:12.529392 systemd[1]: Started cri-containerd-c51a937783383ad4e3d52639a2ea01266e6f0fd0e9682651ec91a7015b68c02c.scope - libcontainer container c51a937783383ad4e3d52639a2ea01266e6f0fd0e9682651ec91a7015b68c02c. 
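An aside on the pod_startup_latency_tracker entry above for kube-system/cilium-zkgh7: the reported durations line up exactly if podStartE2EDuration is measured from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). The sketch below only redoes that arithmetic with the timestamps quoted in the entry; it is not kubelet code.

# Re-deriving the durations in the cilium-zkgh7 pod_startup_latency_tracker entry.
# Timestamps are copied from the log and expressed as seconds after
# podCreationTimestamp (2025-05-14 23:57:55 +0000 UTC).
first_started_pulling  = 1.146245744    # firstStartedPulling      23:57:56.146245744
last_finished_pulling  = 6.097747386    # lastFinishedPulling      23:58:01.097747386
watch_observed_running = 15.083537753   # watchObservedRunningTime 23:58:10.083537753

image_pull = last_finished_pulling - first_started_pulling   # ~4.951501642 s
e2e        = watch_observed_running                          # podStartE2EDuration "15.083537753s"
slo        = watch_observed_running - image_pull             # podStartSLOduration  10.132036111

print(f"pull={image_pull:.9f}s  e2e={e2e:.9f}s  slo={slo:.9f}s")

The coredns-6f6b679f8f entries further down show the degenerate case: firstStartedPulling and lastFinishedPulling are the zero time, no pull window is counted, and the SLO and E2E durations coincide (17.981649824s and 17.998378003s respectively).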
May 14 23:58:12.588774 containerd[1488]: time="2025-05-14T23:58:12.588598737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j6m2g,Uid:18366fa1-f062-475e-bdfa-396be96f3c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2dcf336e5418129b2d009de1c9998ba08849033ceb0c43fe9c16061c50a9530\"" May 14 23:58:12.595967 containerd[1488]: time="2025-05-14T23:58:12.595834454Z" level=info msg="CreateContainer within sandbox \"f2dcf336e5418129b2d009de1c9998ba08849033ceb0c43fe9c16061c50a9530\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:58:12.600678 containerd[1488]: time="2025-05-14T23:58:12.600531065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fk278,Uid:2cbb87aa-e8e0-4349-957f-78fee5d164b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c51a937783383ad4e3d52639a2ea01266e6f0fd0e9682651ec91a7015b68c02c\"" May 14 23:58:12.608164 containerd[1488]: time="2025-05-14T23:58:12.608117786Z" level=info msg="CreateContainer within sandbox \"c51a937783383ad4e3d52639a2ea01266e6f0fd0e9682651ec91a7015b68c02c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:58:12.626022 containerd[1488]: time="2025-05-14T23:58:12.625783015Z" level=info msg="CreateContainer within sandbox \"f2dcf336e5418129b2d009de1c9998ba08849033ceb0c43fe9c16061c50a9530\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45dba0368a311b3e7835bce3c6c4623f743b0c65e488efb37738c4c6ff798fb4\"" May 14 23:58:12.626882 containerd[1488]: time="2025-05-14T23:58:12.626850826Z" level=info msg="StartContainer for \"45dba0368a311b3e7835bce3c6c4623f743b0c65e488efb37738c4c6ff798fb4\"" May 14 23:58:12.650089 containerd[1488]: time="2025-05-14T23:58:12.649950234Z" level=info msg="CreateContainer within sandbox \"c51a937783383ad4e3d52639a2ea01266e6f0fd0e9682651ec91a7015b68c02c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee65bf7388617846cf32b79ade18fa882f9f0c0b7749ffbbb35e20c260df3fb3\"" May 14 23:58:12.651354 containerd[1488]: time="2025-05-14T23:58:12.651074846Z" level=info msg="StartContainer for \"ee65bf7388617846cf32b79ade18fa882f9f0c0b7749ffbbb35e20c260df3fb3\"" May 14 23:58:12.679279 systemd[1]: Started cri-containerd-45dba0368a311b3e7835bce3c6c4623f743b0c65e488efb37738c4c6ff798fb4.scope - libcontainer container 45dba0368a311b3e7835bce3c6c4623f743b0c65e488efb37738c4c6ff798fb4. May 14 23:58:12.697588 systemd[1]: Started cri-containerd-ee65bf7388617846cf32b79ade18fa882f9f0c0b7749ffbbb35e20c260df3fb3.scope - libcontainer container ee65bf7388617846cf32b79ade18fa882f9f0c0b7749ffbbb35e20c260df3fb3. 
May 14 23:58:12.721861 containerd[1488]: time="2025-05-14T23:58:12.721726442Z" level=info msg="StartContainer for \"45dba0368a311b3e7835bce3c6c4623f743b0c65e488efb37738c4c6ff798fb4\" returns successfully" May 14 23:58:12.737990 containerd[1488]: time="2025-05-14T23:58:12.737846574Z" level=info msg="StartContainer for \"ee65bf7388617846cf32b79ade18fa882f9f0c0b7749ffbbb35e20c260df3fb3\" returns successfully" May 14 23:58:12.983612 kubelet[2755]: I0514 23:58:12.981667 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j6m2g" podStartSLOduration=17.981649824 podStartE2EDuration="17.981649824s" podCreationTimestamp="2025-05-14 23:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:58:12.979675243 +0000 UTC m=+23.270695953" watchObservedRunningTime="2025-05-14 23:58:12.981649824 +0000 UTC m=+23.272670494" May 14 23:58:12.998486 kubelet[2755]: I0514 23:58:12.998402 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fk278" podStartSLOduration=17.998378003 podStartE2EDuration="17.998378003s" podCreationTimestamp="2025-05-14 23:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:58:12.995474972 +0000 UTC m=+23.286495682" watchObservedRunningTime="2025-05-14 23:58:12.998378003 +0000 UTC m=+23.289398713" May 15 00:00:21.609839 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 15 00:00:21.618092 systemd[1]: logrotate.service: Deactivated successfully. May 15 00:00:58.445932 systemd[1]: Started sshd@7-91.99.8.230:22-194.0.234.19:40562.service - OpenSSH per-connection server daemon (194.0.234.19:40562). May 15 00:00:59.591293 sshd[4169]: Connection closed by authenticating user nobody 194.0.234.19 port 40562 [preauth] May 15 00:00:59.595459 systemd[1]: sshd@7-91.99.8.230:22-194.0.234.19:40562.service: Deactivated successfully. May 15 00:02:28.516694 systemd[1]: Started sshd@8-91.99.8.230:22-147.75.109.163:56622.service - OpenSSH per-connection server daemon (147.75.109.163:56622). May 15 00:02:29.508155 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 56622 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:29.510377 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:29.517839 systemd-logind[1468]: New session 8 of user core. May 15 00:02:29.526781 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:02:30.280358 sshd[4184]: Connection closed by 147.75.109.163 port 56622 May 15 00:02:30.281330 sshd-session[4182]: pam_unix(sshd:session): session closed for user core May 15 00:02:30.286885 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. May 15 00:02:30.288865 systemd[1]: sshd@8-91.99.8.230:22-147.75.109.163:56622.service: Deactivated successfully. May 15 00:02:30.294060 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:02:30.295374 systemd-logind[1468]: Removed session 8. May 15 00:02:35.462913 systemd[1]: Started sshd@9-91.99.8.230:22-147.75.109.163:56624.service - OpenSSH per-connection server daemon (147.75.109.163:56624). 
May 15 00:02:36.445636 sshd[4197]: Accepted publickey for core from 147.75.109.163 port 56624 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:36.448080 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:36.453186 systemd-logind[1468]: New session 9 of user core. May 15 00:02:36.458720 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:02:37.209628 sshd[4199]: Connection closed by 147.75.109.163 port 56624 May 15 00:02:37.210133 sshd-session[4197]: pam_unix(sshd:session): session closed for user core May 15 00:02:37.215213 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. May 15 00:02:37.215357 systemd[1]: sshd@9-91.99.8.230:22-147.75.109.163:56624.service: Deactivated successfully. May 15 00:02:37.218392 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:02:37.219726 systemd-logind[1468]: Removed session 9. May 15 00:02:42.402908 systemd[1]: Started sshd@10-91.99.8.230:22-147.75.109.163:34268.service - OpenSSH per-connection server daemon (147.75.109.163:34268). May 15 00:02:43.412557 sshd[4211]: Accepted publickey for core from 147.75.109.163 port 34268 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:43.414546 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:43.419511 systemd-logind[1468]: New session 10 of user core. May 15 00:02:43.428680 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:02:44.182583 sshd[4213]: Connection closed by 147.75.109.163 port 34268 May 15 00:02:44.183594 sshd-session[4211]: pam_unix(sshd:session): session closed for user core May 15 00:02:44.188157 systemd[1]: sshd@10-91.99.8.230:22-147.75.109.163:34268.service: Deactivated successfully. May 15 00:02:44.192384 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:02:44.193345 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. May 15 00:02:44.194927 systemd-logind[1468]: Removed session 10. May 15 00:02:44.367081 systemd[1]: Started sshd@11-91.99.8.230:22-147.75.109.163:34272.service - OpenSSH per-connection server daemon (147.75.109.163:34272). May 15 00:02:45.373781 sshd[4226]: Accepted publickey for core from 147.75.109.163 port 34272 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:45.376189 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:45.382459 systemd-logind[1468]: New session 11 of user core. May 15 00:02:45.384603 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:02:46.175523 sshd[4229]: Connection closed by 147.75.109.163 port 34272 May 15 00:02:46.176467 sshd-session[4226]: pam_unix(sshd:session): session closed for user core May 15 00:02:46.181310 systemd[1]: sshd@11-91.99.8.230:22-147.75.109.163:34272.service: Deactivated successfully. May 15 00:02:46.183614 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:02:46.184727 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. May 15 00:02:46.185639 systemd-logind[1468]: Removed session 11. May 15 00:02:46.349755 systemd[1]: Started sshd@12-91.99.8.230:22-147.75.109.163:34288.service - OpenSSH per-connection server daemon (147.75.109.163:34288). 
May 15 00:02:47.331208 sshd[4239]: Accepted publickey for core from 147.75.109.163 port 34288 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:47.333482 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:47.338747 systemd-logind[1468]: New session 12 of user core. May 15 00:02:47.347613 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:02:48.080437 sshd[4241]: Connection closed by 147.75.109.163 port 34288 May 15 00:02:48.079701 sshd-session[4239]: pam_unix(sshd:session): session closed for user core May 15 00:02:48.084938 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. May 15 00:02:48.085738 systemd[1]: sshd@12-91.99.8.230:22-147.75.109.163:34288.service: Deactivated successfully. May 15 00:02:48.088355 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:02:48.090354 systemd-logind[1468]: Removed session 12. May 15 00:02:53.266933 systemd[1]: Started sshd@13-91.99.8.230:22-147.75.109.163:55802.service - OpenSSH per-connection server daemon (147.75.109.163:55802). May 15 00:02:54.263867 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 55802 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:54.265961 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:54.270895 systemd-logind[1468]: New session 13 of user core. May 15 00:02:54.275625 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:02:55.031179 sshd[4256]: Connection closed by 147.75.109.163 port 55802 May 15 00:02:55.030988 sshd-session[4254]: pam_unix(sshd:session): session closed for user core May 15 00:02:55.036891 systemd[1]: sshd@13-91.99.8.230:22-147.75.109.163:55802.service: Deactivated successfully. May 15 00:02:55.040354 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:02:55.041869 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. May 15 00:02:55.043224 systemd-logind[1468]: Removed session 13. May 15 00:02:55.215903 systemd[1]: Started sshd@14-91.99.8.230:22-147.75.109.163:55808.service - OpenSSH per-connection server daemon (147.75.109.163:55808). May 15 00:02:56.227707 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 55808 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:56.230214 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:56.235801 systemd-logind[1468]: New session 14 of user core. May 15 00:02:56.244739 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:02:57.057569 sshd[4270]: Connection closed by 147.75.109.163 port 55808 May 15 00:02:57.058566 sshd-session[4268]: pam_unix(sshd:session): session closed for user core May 15 00:02:57.065085 systemd[1]: sshd@14-91.99.8.230:22-147.75.109.163:55808.service: Deactivated successfully. May 15 00:02:57.068205 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:02:57.069184 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. May 15 00:02:57.071935 systemd-logind[1468]: Removed session 14. May 15 00:02:57.227673 systemd[1]: Started sshd@15-91.99.8.230:22-147.75.109.163:55812.service - OpenSSH per-connection server daemon (147.75.109.163:55812). 
May 15 00:02:58.207108 sshd[4282]: Accepted publickey for core from 147.75.109.163 port 55812 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:02:58.210457 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:58.217501 systemd-logind[1468]: New session 15 of user core. May 15 00:02:58.225765 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:03:00.540527 sshd[4284]: Connection closed by 147.75.109.163 port 55812 May 15 00:03:00.539330 sshd-session[4282]: pam_unix(sshd:session): session closed for user core May 15 00:03:00.546942 systemd[1]: sshd@15-91.99.8.230:22-147.75.109.163:55812.service: Deactivated successfully. May 15 00:03:00.549819 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:03:00.550957 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. May 15 00:03:00.552235 systemd-logind[1468]: Removed session 15. May 15 00:03:00.715844 systemd[1]: Started sshd@16-91.99.8.230:22-147.75.109.163:52808.service - OpenSSH per-connection server daemon (147.75.109.163:52808). May 15 00:03:01.697693 sshd[4301]: Accepted publickey for core from 147.75.109.163 port 52808 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:01.700110 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:01.706931 systemd-logind[1468]: New session 16 of user core. May 15 00:03:01.718735 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:03:02.579139 sshd[4303]: Connection closed by 147.75.109.163 port 52808 May 15 00:03:02.579638 sshd-session[4301]: pam_unix(sshd:session): session closed for user core May 15 00:03:02.584369 systemd[1]: sshd@16-91.99.8.230:22-147.75.109.163:52808.service: Deactivated successfully. May 15 00:03:02.586125 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:03:02.586948 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. May 15 00:03:02.588232 systemd-logind[1468]: Removed session 16. May 15 00:03:02.750739 systemd[1]: Started sshd@17-91.99.8.230:22-147.75.109.163:52810.service - OpenSSH per-connection server daemon (147.75.109.163:52810). May 15 00:03:03.731197 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 52810 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:03.733337 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:03.741232 systemd-logind[1468]: New session 17 of user core. May 15 00:03:03.746735 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:03:04.479085 sshd[4315]: Connection closed by 147.75.109.163 port 52810 May 15 00:03:04.478273 sshd-session[4313]: pam_unix(sshd:session): session closed for user core May 15 00:03:04.483941 systemd[1]: sshd@17-91.99.8.230:22-147.75.109.163:52810.service: Deactivated successfully. May 15 00:03:04.487100 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:03:04.488724 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. May 15 00:03:04.490914 systemd-logind[1468]: Removed session 17. May 15 00:03:09.668995 systemd[1]: Started sshd@18-91.99.8.230:22-147.75.109.163:49156.service - OpenSSH per-connection server daemon (147.75.109.163:49156). 
May 15 00:03:10.660229 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 49156 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:10.662616 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:10.670543 systemd-logind[1468]: New session 18 of user core. May 15 00:03:10.679793 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:03:11.420564 sshd[4331]: Connection closed by 147.75.109.163 port 49156 May 15 00:03:11.421510 sshd-session[4329]: pam_unix(sshd:session): session closed for user core May 15 00:03:11.427792 systemd[1]: sshd@18-91.99.8.230:22-147.75.109.163:49156.service: Deactivated successfully. May 15 00:03:11.431009 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:03:11.432268 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. May 15 00:03:11.433832 systemd-logind[1468]: Removed session 18. May 15 00:03:16.601762 systemd[1]: Started sshd@19-91.99.8.230:22-147.75.109.163:49160.service - OpenSSH per-connection server daemon (147.75.109.163:49160). May 15 00:03:17.599140 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 49160 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:17.601575 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:17.607309 systemd-logind[1468]: New session 19 of user core. May 15 00:03:17.614734 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:03:18.356797 sshd[4346]: Connection closed by 147.75.109.163 port 49160 May 15 00:03:18.357734 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 15 00:03:18.362099 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. May 15 00:03:18.362672 systemd[1]: sshd@19-91.99.8.230:22-147.75.109.163:49160.service: Deactivated successfully. May 15 00:03:18.364969 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:03:18.366256 systemd-logind[1468]: Removed session 19. May 15 00:03:18.535819 systemd[1]: Started sshd@20-91.99.8.230:22-147.75.109.163:37138.service - OpenSSH per-connection server daemon (147.75.109.163:37138). May 15 00:03:19.535630 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 37138 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:19.537996 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:19.543676 systemd-logind[1468]: New session 20 of user core. May 15 00:03:19.551764 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:03:23.237330 systemd[1]: run-containerd-runc-k8s.io-2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe-runc.1Ts6v6.mount: Deactivated successfully. May 15 00:03:23.239306 containerd[1488]: time="2025-05-15T00:03:23.237386382Z" level=info msg="StopContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" with timeout 30 (s)" May 15 00:03:23.239306 containerd[1488]: time="2025-05-15T00:03:23.238313278Z" level=info msg="Stop container \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" with signal terminated" May 15 00:03:23.254205 systemd[1]: cri-containerd-eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82.scope: Deactivated successfully. 
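For context on the StopContainer "eb3baab..." with timeout 30 (s) / Stop container ... with signal terminated entries above: a CRI stop is a graceful-stop pattern, SIGTERM first, SIGKILL only if the container outlives the timeout. The sketch below illustrates that generic pattern against an ordinary subprocess; it is an illustration of the semantics, not containerd's implementation, and stop_with_timeout is a made-up helper name.

import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout_s: float = 30.0) -> int:
    """Graceful stop: send SIGTERM, wait up to timeout_s, then SIGKILL.
    Illustrative only -- mirrors the semantics of the logged StopContainer
    calls, not containerd's code."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()          # SIGKILL; no further grace period
        return proc.wait()

# Example: stop a long-running process with a 30 s grace period.
p = subprocess.Popen(["sleep", "300"])
print("exit status:", stop_with_timeout(p, timeout_s=30.0))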
May 15 00:03:23.256997 containerd[1488]: time="2025-05-15T00:03:23.256557274Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:03:23.272009 containerd[1488]: time="2025-05-15T00:03:23.271507493Z" level=info msg="StopContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" with timeout 2 (s)" May 15 00:03:23.273462 containerd[1488]: time="2025-05-15T00:03:23.273361005Z" level=info msg="Stop container \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" with signal terminated" May 15 00:03:23.286858 systemd-networkd[1393]: lxc_health: Link DOWN May 15 00:03:23.287724 systemd-networkd[1393]: lxc_health: Lost carrier May 15 00:03:23.291969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82-rootfs.mount: Deactivated successfully. May 15 00:03:23.307487 containerd[1488]: time="2025-05-15T00:03:23.306952826Z" level=info msg="shim disconnected" id=eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82 namespace=k8s.io May 15 00:03:23.307487 containerd[1488]: time="2025-05-15T00:03:23.307011627Z" level=warning msg="cleaning up after shim disconnected" id=eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82 namespace=k8s.io May 15 00:03:23.307487 containerd[1488]: time="2025-05-15T00:03:23.307030228Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:23.307804 systemd[1]: cri-containerd-2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe.scope: Deactivated successfully. May 15 00:03:23.308091 systemd[1]: cri-containerd-2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe.scope: Consumed 7.664s CPU time, 125.1M memory peak, 128K read from disk, 12.9M written to disk. May 15 00:03:23.333687 containerd[1488]: time="2025-05-15T00:03:23.333626488Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:03:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:03:23.335895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe-rootfs.mount: Deactivated successfully. 
May 15 00:03:23.337081 containerd[1488]: time="2025-05-15T00:03:23.335375118Z" level=info msg="shim disconnected" id=2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe namespace=k8s.io May 15 00:03:23.337081 containerd[1488]: time="2025-05-15T00:03:23.336927905Z" level=warning msg="cleaning up after shim disconnected" id=2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe namespace=k8s.io May 15 00:03:23.337081 containerd[1488]: time="2025-05-15T00:03:23.336938945Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:23.337435 containerd[1488]: time="2025-05-15T00:03:23.337340592Z" level=info msg="StopContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" returns successfully" May 15 00:03:23.338299 containerd[1488]: time="2025-05-15T00:03:23.338108285Z" level=info msg="StopPodSandbox for \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\"" May 15 00:03:23.338299 containerd[1488]: time="2025-05-15T00:03:23.338148806Z" level=info msg="Container to stop \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.342774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12-shm.mount: Deactivated successfully. May 15 00:03:23.352917 systemd[1]: cri-containerd-9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12.scope: Deactivated successfully. May 15 00:03:23.365076 containerd[1488]: time="2025-05-15T00:03:23.364944910Z" level=info msg="StopContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" returns successfully" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365586761Z" level=info msg="StopPodSandbox for \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\"" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365618481Z" level=info msg="Container to stop \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365628562Z" level=info msg="Container to stop \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365636602Z" level=info msg="Container to stop \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365644922Z" level=info msg="Container to stop \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.365849 containerd[1488]: time="2025-05-15T00:03:23.365695123Z" level=info msg="Container to stop \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:03:23.373190 systemd[1]: cri-containerd-7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8.scope: Deactivated successfully. 
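The "failed to reload cni configuration after receiving fs change event(REMOVE ...05-cilium.conf)" entry earlier in this teardown, and the "cni plugin not initialized" / NodeNotReady messages that follow later, both come down to the CNI config directory being left empty once Cilium's conf file is removed. Below is a minimal sketch of that lookup rule, offered as an illustration of the behaviour rather than containerd's actual loader (the function name is made up):

from pathlib import Path

def first_cni_config(net_d: str = "/etc/cni/net.d") -> "Path | None":
    d = Path(net_d)
    if not d.is_dir():
        return None
    # Simplified libcni-style lookup: consider *.conf, *.conflist and *.json,
    # sort by file name, take the first; an empty directory means no CNI
    # network is configured at all.
    candidates = sorted(p for p in d.iterdir()
                        if p.suffix in {".conf", ".conflist", ".json"})
    return candidates[0] if candidates else None

print(first_cni_config() or "no network config found in /etc/cni/net.d")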
May 15 00:03:23.386311 containerd[1488]: time="2025-05-15T00:03:23.385616788Z" level=info msg="shim disconnected" id=9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12 namespace=k8s.io May 15 00:03:23.386311 containerd[1488]: time="2025-05-15T00:03:23.386289159Z" level=warning msg="cleaning up after shim disconnected" id=9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12 namespace=k8s.io May 15 00:03:23.386311 containerd[1488]: time="2025-05-15T00:03:23.386299439Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:23.399760 containerd[1488]: time="2025-05-15T00:03:23.399552949Z" level=info msg="shim disconnected" id=7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8 namespace=k8s.io May 15 00:03:23.399760 containerd[1488]: time="2025-05-15T00:03:23.399607190Z" level=warning msg="cleaning up after shim disconnected" id=7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8 namespace=k8s.io May 15 00:03:23.399760 containerd[1488]: time="2025-05-15T00:03:23.399615230Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:23.404080 containerd[1488]: time="2025-05-15T00:03:23.403519537Z" level=info msg="TearDown network for sandbox \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\" successfully" May 15 00:03:23.404080 containerd[1488]: time="2025-05-15T00:03:23.403561658Z" level=info msg="StopPodSandbox for \"9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12\" returns successfully" May 15 00:03:23.419877 containerd[1488]: time="2025-05-15T00:03:23.419819979Z" level=info msg="TearDown network for sandbox \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" successfully" May 15 00:03:23.419877 containerd[1488]: time="2025-05-15T00:03:23.419868980Z" level=info msg="StopPodSandbox for \"7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8\" returns successfully" May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538668 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-xtables-lock\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538771 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-kernel\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538813 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-hubble-tls\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538852 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6a9906e-b329-43ff-8780-91ead501c379-cilium-config-path\") pod \"c6a9906e-b329-43ff-8780-91ead501c379\" (UID: \"c6a9906e-b329-43ff-8780-91ead501c379\") " May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538883 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-bpf-maps\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.538929 kubelet[2755]: I0515 00:03:23.538930 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cni-path\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.538961 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-run\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.538988 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-etc-cni-netd\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.539015 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-lib-modules\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.539046 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5e917e-4836-47e0-9b1f-de5afb939f13-clustermesh-secrets\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.539078 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-config-path\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.539937 kubelet[2755]: I0515 00:03:23.539104 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-cgroup\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.540278 kubelet[2755]: I0515 00:03:23.539133 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xv7r\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-kube-api-access-6xv7r\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.540278 kubelet[2755]: I0515 00:03:23.539160 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-hostproc\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.540278 kubelet[2755]: I0515 00:03:23.539185 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-net\") pod \"3d5e917e-4836-47e0-9b1f-de5afb939f13\" (UID: \"3d5e917e-4836-47e0-9b1f-de5afb939f13\") " May 15 00:03:23.540278 kubelet[2755]: I0515 00:03:23.539217 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxtjl\" (UniqueName: \"kubernetes.io/projected/c6a9906e-b329-43ff-8780-91ead501c379-kube-api-access-dxtjl\") pod \"c6a9906e-b329-43ff-8780-91ead501c379\" (UID: \"c6a9906e-b329-43ff-8780-91ead501c379\") " May 15 00:03:23.540809 kubelet[2755]: I0515 00:03:23.540730 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.541234 kubelet[2755]: I0515 00:03:23.541055 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.541727 kubelet[2755]: I0515 00:03:23.541153 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.546351 kubelet[2755]: I0515 00:03:23.545871 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.546640 kubelet[2755]: I0515 00:03:23.546616 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.546794 kubelet[2755]: I0515 00:03:23.546779 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.546885 kubelet[2755]: I0515 00:03:23.546871 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.548415 kubelet[2755]: I0515 00:03:23.548369 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.548631 kubelet[2755]: I0515 00:03:23.548530 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.548821 kubelet[2755]: I0515 00:03:23.548678 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:03:23.548922 kubelet[2755]: I0515 00:03:23.548903 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a9906e-b329-43ff-8780-91ead501c379-kube-api-access-dxtjl" (OuterVolumeSpecName: "kube-api-access-dxtjl") pod "c6a9906e-b329-43ff-8780-91ead501c379" (UID: "c6a9906e-b329-43ff-8780-91ead501c379"). InnerVolumeSpecName "kube-api-access-dxtjl". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:03:23.550606 kubelet[2755]: I0515 00:03:23.550580 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:03:23.550879 kubelet[2755]: I0515 00:03:23.550820 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:03:23.551030 kubelet[2755]: I0515 00:03:23.550966 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5e917e-4836-47e0-9b1f-de5afb939f13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:03:23.553076 kubelet[2755]: I0515 00:03:23.552905 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6a9906e-b329-43ff-8780-91ead501c379-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6a9906e-b329-43ff-8780-91ead501c379" (UID: "c6a9906e-b329-43ff-8780-91ead501c379"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:03:23.554315 kubelet[2755]: I0515 00:03:23.554257 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-kube-api-access-6xv7r" (OuterVolumeSpecName: "kube-api-access-6xv7r") pod "3d5e917e-4836-47e0-9b1f-de5afb939f13" (UID: "3d5e917e-4836-47e0-9b1f-de5afb939f13"). InnerVolumeSpecName "kube-api-access-6xv7r". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639518 2755 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-config-path\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639557 2755 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-cgroup\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639569 2755 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6xv7r\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-kube-api-access-6xv7r\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639580 2755 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-hostproc\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639590 2755 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-net\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639604 2755 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dxtjl\" (UniqueName: \"kubernetes.io/projected/c6a9906e-b329-43ff-8780-91ead501c379-kube-api-access-dxtjl\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639613 2755 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-xtables-lock\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.639768 kubelet[2755]: I0515 00:03:23.639623 2755 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-host-proc-sys-kernel\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639634 2755 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5e917e-4836-47e0-9b1f-de5afb939f13-hubble-tls\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639680 2755 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6a9906e-b329-43ff-8780-91ead501c379-cilium-config-path\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639693 2755 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-bpf-maps\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639704 2755 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cni-path\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639713 2755 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-cilium-run\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639723 2755 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-etc-cni-netd\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639733 2755 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5e917e-4836-47e0-9b1f-de5afb939f13-lib-modules\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.640091 kubelet[2755]: I0515 00:03:23.639742 2755 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5e917e-4836-47e0-9b1f-de5afb939f13-clustermesh-secrets\") on node \"ci-4230-1-1-n-df83517ae5\" DevicePath \"\"" May 15 00:03:23.749214 kubelet[2755]: I0515 00:03:23.748576 2755 scope.go:117] "RemoveContainer" containerID="eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82" May 15 00:03:23.752538 containerd[1488]: time="2025-05-15T00:03:23.752499537Z" level=info msg="RemoveContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\"" May 15 00:03:23.763017 systemd[1]: Removed slice kubepods-besteffort-podc6a9906e_b329_43ff_8780_91ead501c379.slice - libcontainer container kubepods-besteffort-podc6a9906e_b329_43ff_8780_91ead501c379.slice. May 15 00:03:23.768570 containerd[1488]: time="2025-05-15T00:03:23.767276192Z" level=info msg="RemoveContainer for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" returns successfully" May 15 00:03:23.768112 systemd[1]: Removed slice kubepods-burstable-pod3d5e917e_4836_47e0_9b1f_de5afb939f13.slice - libcontainer container kubepods-burstable-pod3d5e917e_4836_47e0_9b1f_de5afb939f13.slice. May 15 00:03:23.768199 systemd[1]: kubepods-burstable-pod3d5e917e_4836_47e0_9b1f_de5afb939f13.slice: Consumed 7.757s CPU time, 125.6M memory peak, 128K read from disk, 12.9M written to disk. 
May 15 00:03:23.769474 kubelet[2755]: I0515 00:03:23.769442 2755 scope.go:117] "RemoveContainer" containerID="eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82" May 15 00:03:23.771647 containerd[1488]: time="2025-05-15T00:03:23.771592187Z" level=error msg="ContainerStatus for \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\": not found" May 15 00:03:23.773847 kubelet[2755]: E0515 00:03:23.772672 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\": not found" containerID="eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82" May 15 00:03:23.773847 kubelet[2755]: I0515 00:03:23.772727 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82"} err="failed to get container status \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb3baab11c4783d65ac3d7d1190b0ef4a83063240b0360368db5248334e20e82\": not found" May 15 00:03:23.773847 kubelet[2755]: I0515 00:03:23.773521 2755 scope.go:117] "RemoveContainer" containerID="2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe" May 15 00:03:23.776548 containerd[1488]: time="2025-05-15T00:03:23.776504472Z" level=info msg="RemoveContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\"" May 15 00:03:23.780694 containerd[1488]: time="2025-05-15T00:03:23.780609543Z" level=info msg="RemoveContainer for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" returns successfully" May 15 00:03:23.781706 kubelet[2755]: I0515 00:03:23.781301 2755 scope.go:117] "RemoveContainer" containerID="614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49" May 15 00:03:23.785491 containerd[1488]: time="2025-05-15T00:03:23.785153142Z" level=info msg="RemoveContainer for \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\"" May 15 00:03:23.792777 containerd[1488]: time="2025-05-15T00:03:23.792383067Z" level=info msg="RemoveContainer for \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\" returns successfully" May 15 00:03:23.793334 kubelet[2755]: I0515 00:03:23.793150 2755 scope.go:117] "RemoveContainer" containerID="81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6" May 15 00:03:23.796464 containerd[1488]: time="2025-05-15T00:03:23.796374816Z" level=info msg="RemoveContainer for \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\"" May 15 00:03:23.801324 containerd[1488]: time="2025-05-15T00:03:23.801268861Z" level=info msg="RemoveContainer for \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\" returns successfully" May 15 00:03:23.801744 kubelet[2755]: I0515 00:03:23.801609 2755 scope.go:117] "RemoveContainer" containerID="b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c" May 15 00:03:23.804757 containerd[1488]: time="2025-05-15T00:03:23.804228232Z" level=info msg="RemoveContainer for \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\"" May 15 00:03:23.809884 containerd[1488]: time="2025-05-15T00:03:23.809699727Z" level=info 
msg="RemoveContainer for \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\" returns successfully" May 15 00:03:23.809964 kubelet[2755]: I0515 00:03:23.809913 2755 scope.go:117] "RemoveContainer" containerID="08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1" May 15 00:03:23.811581 containerd[1488]: time="2025-05-15T00:03:23.811139431Z" level=info msg="RemoveContainer for \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\"" May 15 00:03:23.815256 containerd[1488]: time="2025-05-15T00:03:23.815199582Z" level=info msg="RemoveContainer for \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\" returns successfully" May 15 00:03:23.815736 kubelet[2755]: I0515 00:03:23.815683 2755 scope.go:117] "RemoveContainer" containerID="2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe" May 15 00:03:23.816488 containerd[1488]: time="2025-05-15T00:03:23.816095837Z" level=error msg="ContainerStatus for \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\": not found" May 15 00:03:23.816824 kubelet[2755]: E0515 00:03:23.816326 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\": not found" containerID="2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe" May 15 00:03:23.816824 kubelet[2755]: I0515 00:03:23.816374 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe"} err="failed to get container status \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f420f8be8fd1232bc4982e7d31a809a2c2ce37147bb496421a615c3015b53fe\": not found" May 15 00:03:23.816824 kubelet[2755]: I0515 00:03:23.816446 2755 scope.go:117] "RemoveContainer" containerID="614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49" May 15 00:03:23.816914 containerd[1488]: time="2025-05-15T00:03:23.816708488Z" level=error msg="ContainerStatus for \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\": not found" May 15 00:03:23.817345 kubelet[2755]: E0515 00:03:23.817130 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\": not found" containerID="614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49" May 15 00:03:23.817345 kubelet[2755]: I0515 00:03:23.817162 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49"} err="failed to get container status \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\": rpc error: code = NotFound desc = an error occurred when try to find container \"614ff33dfad93aa60668a0e6718df960898343f1c1949b923383e559755f2e49\": not found" May 15 00:03:23.817345 kubelet[2755]: I0515 00:03:23.817178 2755 
scope.go:117] "RemoveContainer" containerID="81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6" May 15 00:03:23.817716 kubelet[2755]: E0515 00:03:23.817519 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\": not found" containerID="81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6" May 15 00:03:23.817716 kubelet[2755]: I0515 00:03:23.817540 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6"} err="failed to get container status \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\": rpc error: code = NotFound desc = an error occurred when try to find container \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\": not found" May 15 00:03:23.817716 kubelet[2755]: I0515 00:03:23.817556 2755 scope.go:117] "RemoveContainer" containerID="b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c" May 15 00:03:23.817802 containerd[1488]: time="2025-05-15T00:03:23.817377019Z" level=error msg="ContainerStatus for \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81a9d6e793a0df8c3af871905c681630105785235bf1039a8a8a1808c2051cd6\": not found" May 15 00:03:23.818225 containerd[1488]: time="2025-05-15T00:03:23.817953469Z" level=error msg="ContainerStatus for \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\": not found" May 15 00:03:23.818285 kubelet[2755]: E0515 00:03:23.818062 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\": not found" containerID="b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c" May 15 00:03:23.818285 kubelet[2755]: I0515 00:03:23.818085 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c"} err="failed to get container status \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3eac86742ac6a16d3386b0bccf0baa5ed9fa10410f989904a7f96be53fabf5c\": not found" May 15 00:03:23.818285 kubelet[2755]: I0515 00:03:23.818108 2755 scope.go:117] "RemoveContainer" containerID="08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1" May 15 00:03:23.818923 containerd[1488]: time="2025-05-15T00:03:23.818680722Z" level=error msg="ContainerStatus for \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\": not found" May 15 00:03:23.818990 kubelet[2755]: E0515 00:03:23.818816 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\": not 
found" containerID="08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1" May 15 00:03:23.818990 kubelet[2755]: I0515 00:03:23.818840 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1"} err="failed to get container status \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"08d1fd1476c708d40e5dedef0b3f5e54c96db8840aa5ac110605985d090298d1\": not found" May 15 00:03:23.821629 kubelet[2755]: I0515 00:03:23.821592 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" path="/var/lib/kubelet/pods/3d5e917e-4836-47e0-9b1f-de5afb939f13/volumes" May 15 00:03:23.822741 kubelet[2755]: I0515 00:03:23.822711 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a9906e-b329-43ff-8780-91ead501c379" path="/var/lib/kubelet/pods/c6a9906e-b329-43ff-8780-91ead501c379/volumes" May 15 00:03:24.229184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e03398dcc0018a8a70bf3e4f423293cb1f05bc8502599ee4c79febd94bc2e12-rootfs.mount: Deactivated successfully. May 15 00:03:24.229575 systemd[1]: var-lib-kubelet-pods-c6a9906e\x2db329\x2d43ff\x2d8780\x2d91ead501c379-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxtjl.mount: Deactivated successfully. May 15 00:03:24.229650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8-rootfs.mount: Deactivated successfully. May 15 00:03:24.229744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e83c185ea3503fd5a895dedc4ca70c9ba0aaf78be7f08f47d384e128fe774c8-shm.mount: Deactivated successfully. May 15 00:03:24.229812 systemd[1]: var-lib-kubelet-pods-3d5e917e\x2d4836\x2d47e0\x2d9b1f\x2dde5afb939f13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6xv7r.mount: Deactivated successfully. May 15 00:03:24.229881 systemd[1]: var-lib-kubelet-pods-3d5e917e\x2d4836\x2d47e0\x2d9b1f\x2dde5afb939f13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:03:24.229943 systemd[1]: var-lib-kubelet-pods-3d5e917e\x2d4836\x2d47e0\x2d9b1f\x2dde5afb939f13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:03:25.014025 kubelet[2755]: E0515 00:03:25.013943 2755 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:03:25.313941 sshd[4359]: Connection closed by 147.75.109.163 port 37138 May 15 00:03:25.314799 sshd-session[4357]: pam_unix(sshd:session): session closed for user core May 15 00:03:25.319664 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. May 15 00:03:25.320774 systemd[1]: sshd@20-91.99.8.230:22-147.75.109.163:37138.service: Deactivated successfully. May 15 00:03:25.323981 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:03:25.324250 systemd[1]: session-20.scope: Consumed 2.507s CPU time, 23.6M memory peak. May 15 00:03:25.325399 systemd-logind[1468]: Removed session 20. May 15 00:03:25.492245 systemd[1]: Started sshd@21-91.99.8.230:22-147.75.109.163:37142.service - OpenSSH per-connection server daemon (147.75.109.163:37142). 
May 15 00:03:26.483948 sshd[4521]: Accepted publickey for core from 147.75.109.163 port 37142 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:26.485779 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:26.492798 systemd-logind[1468]: New session 21 of user core. May 15 00:03:26.496605 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:03:27.059436 kubelet[2755]: I0515 00:03:27.057044 2755 setters.go:600] "Node became not ready" node="ci-4230-1-1-n-df83517ae5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:03:27Z","lastTransitionTime":"2025-05-15T00:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978437 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="mount-cgroup" May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978474 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="apply-sysctl-overwrites" May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978482 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="mount-bpf-fs" May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978488 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="cilium-agent" May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978495 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6a9906e-b329-43ff-8780-91ead501c379" containerName="cilium-operator" May 15 00:03:28.979353 kubelet[2755]: E0515 00:03:28.978501 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="clean-cilium-state" May 15 00:03:28.979353 kubelet[2755]: I0515 00:03:28.978530 2755 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5e917e-4836-47e0-9b1f-de5afb939f13" containerName="cilium-agent" May 15 00:03:28.979353 kubelet[2755]: I0515 00:03:28.978538 2755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a9906e-b329-43ff-8780-91ead501c379" containerName="cilium-operator" May 15 00:03:28.986607 systemd[1]: Created slice kubepods-burstable-pod685b9bd8_dc86_4917_b70f_ffd0bd118bff.slice - libcontainer container kubepods-burstable-pod685b9bd8_dc86_4917_b70f_ffd0bd118bff.slice. 
May 15 00:03:28.992847 kubelet[2755]: W0515 00:03:28.992813 2755 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-1-1-n-df83517ae5" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object May 15 00:03:28.992994 kubelet[2755]: E0515 00:03:28.992975 2755 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-1-1-n-df83517ae5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object" logger="UnhandledError" May 15 00:03:28.993678 kubelet[2755]: W0515 00:03:28.993098 2755 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-1-n-df83517ae5" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object May 15 00:03:28.993839 kubelet[2755]: E0515 00:03:28.993814 2755 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-1-1-n-df83517ae5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object" logger="UnhandledError" May 15 00:03:28.993960 kubelet[2755]: W0515 00:03:28.993946 2755 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-1-1-n-df83517ae5" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object May 15 00:03:28.994025 kubelet[2755]: E0515 00:03:28.994011 2755 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-1-1-n-df83517ae5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-df83517ae5' and this object" logger="UnhandledError" May 15 00:03:29.073125 kubelet[2755]: I0515 00:03:29.073070 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/685b9bd8-dc86-4917-b70f-ffd0bd118bff-hubble-tls\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.073606 kubelet[2755]: I0515 00:03:29.073530 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-etc-cni-netd\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.073955 kubelet[2755]: I0515 00:03:29.073874 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pph2v\" (UniqueName: \"kubernetes.io/projected/685b9bd8-dc86-4917-b70f-ffd0bd118bff-kube-api-access-pph2v\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.074233 kubelet[2755]: I0515 00:03:29.074137 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-host-proc-sys-kernel\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.074436 kubelet[2755]: I0515 00:03:29.074313 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-bpf-maps\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.074700 kubelet[2755]: I0515 00:03:29.074564 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-hostproc\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.074700 kubelet[2755]: I0515 00:03:29.074635 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-cni-path\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.074821 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/685b9bd8-dc86-4917-b70f-ffd0bd118bff-cilium-config-path\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.074878 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/685b9bd8-dc86-4917-b70f-ffd0bd118bff-cilium-ipsec-secrets\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.074948 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-xtables-lock\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.074991 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/685b9bd8-dc86-4917-b70f-ffd0bd118bff-clustermesh-secrets\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.075028 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-lib-modules\") pod \"cilium-j98j5\" (UID: 
\"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075427 kubelet[2755]: I0515 00:03:29.075066 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-cilium-cgroup\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075923 kubelet[2755]: I0515 00:03:29.075101 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-host-proc-sys-net\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.075923 kubelet[2755]: I0515 00:03:29.075137 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/685b9bd8-dc86-4917-b70f-ffd0bd118bff-cilium-run\") pod \"cilium-j98j5\" (UID: \"685b9bd8-dc86-4917-b70f-ffd0bd118bff\") " pod="kube-system/cilium-j98j5" May 15 00:03:29.158835 sshd[4525]: Connection closed by 147.75.109.163 port 37142 May 15 00:03:29.159853 sshd-session[4521]: pam_unix(sshd:session): session closed for user core May 15 00:03:29.162898 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. May 15 00:03:29.163073 systemd[1]: sshd@21-91.99.8.230:22-147.75.109.163:37142.service: Deactivated successfully. May 15 00:03:29.165297 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:03:29.165818 systemd[1]: session-21.scope: Consumed 1.845s CPU time, 25.8M memory peak. May 15 00:03:29.167784 systemd-logind[1468]: Removed session 21. May 15 00:03:29.339766 systemd[1]: Started sshd@22-91.99.8.230:22-147.75.109.163:42532.service - OpenSSH per-connection server daemon (147.75.109.163:42532). May 15 00:03:30.017434 kubelet[2755]: E0515 00:03:30.016856 2755 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:03:30.177170 kubelet[2755]: E0515 00:03:30.177056 2755 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 15 00:03:30.177293 kubelet[2755]: E0515 00:03:30.177182 2755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/685b9bd8-dc86-4917-b70f-ffd0bd118bff-clustermesh-secrets podName:685b9bd8-dc86-4917-b70f-ffd0bd118bff nodeName:}" failed. No retries permitted until 2025-05-15 00:03:30.677155006 +0000 UTC m=+340.968175716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/685b9bd8-dc86-4917-b70f-ffd0bd118bff-clustermesh-secrets") pod "cilium-j98j5" (UID: "685b9bd8-dc86-4917-b70f-ffd0bd118bff") : failed to sync secret cache: timed out waiting for the condition May 15 00:03:30.351441 sshd[4536]: Accepted publickey for core from 147.75.109.163 port 42532 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:30.353534 sshd-session[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:30.359464 systemd-logind[1468]: New session 22 of user core. May 15 00:03:30.361702 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 15 00:03:30.792919 containerd[1488]: time="2025-05-15T00:03:30.792777658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j98j5,Uid:685b9bd8-dc86-4917-b70f-ffd0bd118bff,Namespace:kube-system,Attempt:0,}" May 15 00:03:30.813369 containerd[1488]: time="2025-05-15T00:03:30.812999570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:03:30.813369 containerd[1488]: time="2025-05-15T00:03:30.813058251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:03:30.813369 containerd[1488]: time="2025-05-15T00:03:30.813073251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:03:30.813369 containerd[1488]: time="2025-05-15T00:03:30.813155413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:03:30.844718 systemd[1]: Started cri-containerd-d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc.scope - libcontainer container d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc. May 15 00:03:30.870781 containerd[1488]: time="2025-05-15T00:03:30.870665935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j98j5,Uid:685b9bd8-dc86-4917-b70f-ffd0bd118bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\"" May 15 00:03:30.875681 containerd[1488]: time="2025-05-15T00:03:30.875456299Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:03:30.893240 containerd[1488]: time="2025-05-15T00:03:30.893053086Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd\"" May 15 00:03:30.894291 containerd[1488]: time="2025-05-15T00:03:30.893841819Z" level=info msg="StartContainer for \"0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd\"" May 15 00:03:30.925813 systemd[1]: Started cri-containerd-0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd.scope - libcontainer container 0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd. May 15 00:03:30.956885 containerd[1488]: time="2025-05-15T00:03:30.956817397Z" level=info msg="StartContainer for \"0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd\" returns successfully" May 15 00:03:30.968246 systemd[1]: cri-containerd-0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd.scope: Deactivated successfully. 
May 15 00:03:31.009993 containerd[1488]: time="2025-05-15T00:03:31.009823121Z" level=info msg="shim disconnected" id=0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd namespace=k8s.io May 15 00:03:31.009993 containerd[1488]: time="2025-05-15T00:03:31.009931483Z" level=warning msg="cleaning up after shim disconnected" id=0327fcbbd61da05b8c7b9095dbda3f03ae39cb2c412b556c5a4d5992e36263cd namespace=k8s.io May 15 00:03:31.009993 containerd[1488]: time="2025-05-15T00:03:31.009951563Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:31.047575 sshd[4540]: Connection closed by 147.75.109.163 port 42532 May 15 00:03:31.048240 sshd-session[4536]: pam_unix(sshd:session): session closed for user core May 15 00:03:31.052815 systemd[1]: sshd@22-91.99.8.230:22-147.75.109.163:42532.service: Deactivated successfully. May 15 00:03:31.055185 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:03:31.057248 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. May 15 00:03:31.058831 systemd-logind[1468]: Removed session 22. May 15 00:03:31.222040 systemd[1]: Started sshd@23-91.99.8.230:22-147.75.109.163:42536.service - OpenSSH per-connection server daemon (147.75.109.163:42536). May 15 00:03:31.789234 containerd[1488]: time="2025-05-15T00:03:31.788763953Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:03:31.809603 containerd[1488]: time="2025-05-15T00:03:31.809552595Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e\"" May 15 00:03:31.810843 containerd[1488]: time="2025-05-15T00:03:31.810802977Z" level=info msg="StartContainer for \"aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e\"" May 15 00:03:31.841643 systemd[1]: Started cri-containerd-aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e.scope - libcontainer container aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e. May 15 00:03:31.881776 containerd[1488]: time="2025-05-15T00:03:31.880506233Z" level=info msg="StartContainer for \"aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e\" returns successfully" May 15 00:03:31.892525 systemd[1]: cri-containerd-aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e.scope: Deactivated successfully. May 15 00:03:31.926944 containerd[1488]: time="2025-05-15T00:03:31.926859802Z" level=info msg="shim disconnected" id=aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e namespace=k8s.io May 15 00:03:31.926944 containerd[1488]: time="2025-05-15T00:03:31.926911523Z" level=warning msg="cleaning up after shim disconnected" id=aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e namespace=k8s.io May 15 00:03:31.926944 containerd[1488]: time="2025-05-15T00:03:31.926920083Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:32.208641 sshd[4652]: Accepted publickey for core from 147.75.109.163 port 42536 ssh2: RSA SHA256:hNRz09BfEKlStD/8HYOmV7dO0A2+98XIluCq5GAh/vY May 15 00:03:32.210560 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:32.218270 systemd-logind[1468]: New session 23 of user core. 
May 15 00:03:32.222634 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 00:03:32.694417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa9ecbab10703d6a02cc30e8930dfb754d163444c583011b4ad7135137547b1e-rootfs.mount: Deactivated successfully. May 15 00:03:32.798022 containerd[1488]: time="2025-05-15T00:03:32.797978335Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:03:32.822831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511346262.mount: Deactivated successfully. May 15 00:03:32.824602 containerd[1488]: time="2025-05-15T00:03:32.824471438Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8\"" May 15 00:03:32.825633 containerd[1488]: time="2025-05-15T00:03:32.825590337Z" level=info msg="StartContainer for \"d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8\"" May 15 00:03:32.875609 systemd[1]: Started cri-containerd-d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8.scope - libcontainer container d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8. May 15 00:03:32.931340 containerd[1488]: time="2025-05-15T00:03:32.931268663Z" level=info msg="StartContainer for \"d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8\" returns successfully" May 15 00:03:32.934611 systemd[1]: cri-containerd-d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8.scope: Deactivated successfully. May 15 00:03:32.964101 containerd[1488]: time="2025-05-15T00:03:32.963534147Z" level=info msg="shim disconnected" id=d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8 namespace=k8s.io May 15 00:03:32.964101 containerd[1488]: time="2025-05-15T00:03:32.963582667Z" level=warning msg="cleaning up after shim disconnected" id=d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8 namespace=k8s.io May 15 00:03:32.964101 containerd[1488]: time="2025-05-15T00:03:32.963590828Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:33.694071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3eac3fe7ed7ab13699d96fbe9b6363b8ba2a373f0b25f85c8ba0098657142a8-rootfs.mount: Deactivated successfully. May 15 00:03:33.802373 containerd[1488]: time="2025-05-15T00:03:33.802191727Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:03:33.817609 containerd[1488]: time="2025-05-15T00:03:33.817471914Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6\"" May 15 00:03:33.821459 containerd[1488]: time="2025-05-15T00:03:33.819331347Z" level=info msg="StartContainer for \"de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6\"" May 15 00:03:33.857596 systemd[1]: Started cri-containerd-de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6.scope - libcontainer container de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6. 
May 15 00:03:33.886882 systemd[1]: cri-containerd-de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6.scope: Deactivated successfully. May 15 00:03:33.890258 containerd[1488]: time="2025-05-15T00:03:33.890212706Z" level=info msg="StartContainer for \"de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6\" returns successfully" May 15 00:03:33.912360 containerd[1488]: time="2025-05-15T00:03:33.912256771Z" level=info msg="shim disconnected" id=de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6 namespace=k8s.io May 15 00:03:33.912360 containerd[1488]: time="2025-05-15T00:03:33.912367653Z" level=warning msg="cleaning up after shim disconnected" id=de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6 namespace=k8s.io May 15 00:03:33.913621 containerd[1488]: time="2025-05-15T00:03:33.912377533Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:03:34.693652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de88bd03e4a002d904573e99363abb80f2f992d838a3e43446f9734118f0aaf6-rootfs.mount: Deactivated successfully. May 15 00:03:34.808482 containerd[1488]: time="2025-05-15T00:03:34.808302168Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:03:34.831314 containerd[1488]: time="2025-05-15T00:03:34.831184889Z" level=info msg="CreateContainer within sandbox \"d0cc1ea27e4f7bf785158c2265d063b5a4e66e2ca7426530df2aa38ca9b2c5fc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59\"" May 15 00:03:34.833062 containerd[1488]: time="2025-05-15T00:03:34.832913119Z" level=info msg="StartContainer for \"0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59\"" May 15 00:03:34.877592 systemd[1]: Started cri-containerd-0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59.scope - libcontainer container 0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59. May 15 00:03:34.919102 containerd[1488]: time="2025-05-15T00:03:34.918975945Z" level=info msg="StartContainer for \"0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59\" returns successfully" May 15 00:03:35.252444 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 00:03:35.693458 systemd[1]: run-containerd-runc-k8s.io-0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59-runc.vHiY3o.mount: Deactivated successfully. May 15 00:03:35.835354 kubelet[2755]: I0515 00:03:35.835266 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j98j5" podStartSLOduration=7.835242991 podStartE2EDuration="7.835242991s" podCreationTimestamp="2025-05-15 00:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:03:35.834905105 +0000 UTC m=+346.125925895" watchObservedRunningTime="2025-05-15 00:03:35.835242991 +0000 UTC m=+346.126263701" May 15 00:03:38.188326 systemd-networkd[1393]: lxc_health: Link UP May 15 00:03:38.192533 systemd-networkd[1393]: lxc_health: Gained carrier May 15 00:03:39.221369 systemd[1]: run-containerd-runc-k8s.io-0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59-runc.Sb07yS.mount: Deactivated successfully. 
May 15 00:03:39.697663 systemd-networkd[1393]: lxc_health: Gained IPv6LL May 15 00:03:41.381962 systemd[1]: run-containerd-runc-k8s.io-0bae35194f62a2dca4aaabe5d4b00b72ab73dca5779d5a4eee50a3086e526f59-runc.QYPJ0A.mount: Deactivated successfully. May 15 00:03:43.752239 sshd[4713]: Connection closed by 147.75.109.163 port 42536 May 15 00:03:43.752711 sshd-session[4652]: pam_unix(sshd:session): session closed for user core May 15 00:03:43.756665 systemd[1]: sshd@23-91.99.8.230:22-147.75.109.163:42536.service: Deactivated successfully. May 15 00:03:43.759859 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:03:43.762048 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. May 15 00:03:43.763193 systemd-logind[1468]: Removed session 23.