May 17 00:16:35.890117 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:16:35.890143 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 00:16:35.890153 kernel: KASLR enabled May 17 00:16:35.890159 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II May 17 00:16:35.890165 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 May 17 00:16:35.890171 kernel: random: crng init done May 17 00:16:35.890178 kernel: ACPI: Early table checksum verification disabled May 17 00:16:35.890184 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) May 17 00:16:35.890194 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) May 17 00:16:35.890203 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890210 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890217 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890224 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890230 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890238 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890247 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890254 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890261 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:16:35.890267 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) May 17 00:16:35.890274 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 May 17 00:16:35.890281 kernel: NUMA: Failed to initialise from firmware May 17 00:16:35.890287 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] May 17 00:16:35.890294 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] May 17 00:16:35.890300 kernel: Zone ranges: May 17 00:16:35.890307 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 17 00:16:35.890314 kernel: DMA32 empty May 17 00:16:35.890321 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] May 17 00:16:35.890327 kernel: Movable zone start for each node May 17 00:16:35.890334 kernel: Early memory node ranges May 17 00:16:35.890340 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] May 17 00:16:35.890347 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] May 17 00:16:35.890353 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] May 17 00:16:35.890360 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] May 17 00:16:35.890366 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] May 17 00:16:35.890374 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] May 17 00:16:35.890380 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] May 17 00:16:35.890387 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] May 17 00:16:35.890394 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges May 17 00:16:35.890401 kernel: psci: probing for conduit method from ACPI. 
May 17 00:16:35.890408 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:16:35.890417 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:16:35.890424 kernel: psci: Trusted OS migration not required May 17 00:16:35.890431 kernel: psci: SMC Calling Convention v1.1 May 17 00:16:35.890439 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 17 00:16:35.890446 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 00:16:35.890453 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 00:16:35.890460 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:16:35.890466 kernel: Detected PIPT I-cache on CPU0 May 17 00:16:35.890473 kernel: CPU features: detected: GIC system register CPU interface May 17 00:16:35.890480 kernel: CPU features: detected: Hardware dirty bit management May 17 00:16:35.890487 kernel: CPU features: detected: Spectre-v4 May 17 00:16:35.890493 kernel: CPU features: detected: Spectre-BHB May 17 00:16:35.890500 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:16:35.890508 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:16:35.890515 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:16:35.890522 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:16:35.890529 kernel: alternatives: applying boot alternatives May 17 00:16:35.890537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:16:35.890544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:16:35.890551 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:16:35.890558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:16:35.890565 kernel: Fallback order for Node 0: 0 May 17 00:16:35.890571 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 May 17 00:16:35.890578 kernel: Policy zone: Normal May 17 00:16:35.890586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:16:35.890593 kernel: software IO TLB: area num 2. May 17 00:16:35.890600 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) May 17 00:16:35.890608 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved) May 17 00:16:35.890615 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:16:35.890621 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:16:35.890629 kernel: rcu: RCU event tracing is enabled. May 17 00:16:35.890636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:16:35.890643 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:16:35.890650 kernel: Tracing variant of Tasks RCU enabled. May 17 00:16:35.890657 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:16:35.890666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:16:35.890723 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:16:35.890732 kernel: GICv3: 256 SPIs implemented May 17 00:16:35.890739 kernel: GICv3: 0 Extended SPIs implemented May 17 00:16:35.890745 kernel: Root IRQ handler: gic_handle_irq May 17 00:16:35.890752 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 17 00:16:35.890759 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 17 00:16:35.890766 kernel: ITS [mem 0x08080000-0x0809ffff] May 17 00:16:35.890773 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:16:35.890781 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) May 17 00:16:35.890787 kernel: GICv3: using LPI property table @0x00000001000e0000 May 17 00:16:35.890794 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 May 17 00:16:35.890803 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:16:35.890811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:16:35.890817 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:16:35.890825 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:16:35.890831 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:16:35.890838 kernel: Console: colour dummy device 80x25 May 17 00:16:35.890845 kernel: ACPI: Core revision 20230628 May 17 00:16:35.890893 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:16:35.890904 kernel: pid_max: default: 32768 minimum: 301 May 17 00:16:35.890912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:16:35.890922 kernel: landlock: Up and running. May 17 00:16:35.890929 kernel: SELinux: Initializing. May 17 00:16:35.890936 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:16:35.890943 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:16:35.890950 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1) May 17 00:16:35.890957 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:16:35.890965 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:16:35.890972 kernel: rcu: Hierarchical SRCU implementation. May 17 00:16:35.890979 kernel: rcu: Max phase no-delay instances is 400. May 17 00:16:35.890987 kernel: Platform MSI: ITS@0x8080000 domain created May 17 00:16:35.890994 kernel: PCI/MSI: ITS@0x8080000 domain created May 17 00:16:35.891001 kernel: Remapping and enabling EFI services. May 17 00:16:35.891011 kernel: smp: Bringing up secondary CPUs ... 
May 17 00:16:35.891019 kernel: Detected PIPT I-cache on CPU1 May 17 00:16:35.891026 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 17 00:16:35.891033 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 May 17 00:16:35.891040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:16:35.891047 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:16:35.891056 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:16:35.891063 kernel: SMP: Total of 2 processors activated. May 17 00:16:35.891070 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:16:35.891082 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:16:35.891091 kernel: CPU features: detected: Common not Private translations May 17 00:16:35.891099 kernel: CPU features: detected: CRC32 instructions May 17 00:16:35.891106 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 00:16:35.891114 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:16:35.891121 kernel: CPU features: detected: LSE atomic instructions May 17 00:16:35.891129 kernel: CPU features: detected: Privileged Access Never May 17 00:16:35.891136 kernel: CPU features: detected: RAS Extension Support May 17 00:16:35.891146 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 00:16:35.891153 kernel: CPU: All CPU(s) started at EL1 May 17 00:16:35.891161 kernel: alternatives: applying system-wide alternatives May 17 00:16:35.891168 kernel: devtmpfs: initialized May 17 00:16:35.891175 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:16:35.891183 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:16:35.891192 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:16:35.891200 kernel: SMBIOS 3.0.0 present. May 17 00:16:35.891207 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 May 17 00:16:35.891215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:16:35.891222 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:16:35.891230 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:16:35.891237 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:16:35.891245 kernel: audit: initializing netlink subsys (disabled) May 17 00:16:35.891252 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 May 17 00:16:35.891262 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:16:35.891269 kernel: cpuidle: using governor menu May 17 00:16:35.891277 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 17 00:16:35.891284 kernel: ASID allocator initialised with 32768 entries May 17 00:16:35.891292 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:16:35.891300 kernel: Serial: AMBA PL011 UART driver May 17 00:16:35.891308 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 17 00:16:35.891316 kernel: Modules: 0 pages in range for non-PLT usage May 17 00:16:35.891323 kernel: Modules: 509024 pages in range for PLT usage May 17 00:16:35.891332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:16:35.891340 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:16:35.891347 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:16:35.891355 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:16:35.891363 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:16:35.891370 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:16:35.891378 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:16:35.891386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:16:35.891393 kernel: ACPI: Added _OSI(Module Device) May 17 00:16:35.891402 kernel: ACPI: Added _OSI(Processor Device) May 17 00:16:35.891409 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:16:35.891417 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:16:35.891424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:16:35.891432 kernel: ACPI: Interpreter enabled May 17 00:16:35.891439 kernel: ACPI: Using GIC for interrupt routing May 17 00:16:35.891447 kernel: ACPI: MCFG table detected, 1 entries May 17 00:16:35.891454 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 17 00:16:35.891462 kernel: printk: console [ttyAMA0] enabled May 17 00:16:35.891471 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:16:35.891625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:16:35.891725 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 17 00:16:35.891793 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 17 00:16:35.891871 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 17 00:16:35.891939 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 17 00:16:35.891949 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 17 00:16:35.891961 kernel: PCI host bridge to bus 0000:00 May 17 00:16:35.892035 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 17 00:16:35.892095 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 17 00:16:35.892153 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 17 00:16:35.892211 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:16:35.892292 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 17 00:16:35.892370 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 May 17 00:16:35.892443 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] May 17 00:16:35.892510 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] May 17 00:16:35.892592 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.892660 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] May 17 00:16:35.892791 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.892899 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] May 17 00:16:35.892997 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893067 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] May 17 00:16:35.893143 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893214 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] May 17 00:16:35.893289 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893358 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] May 17 00:16:35.893438 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] May 17 00:16:35.893595 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893664 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] May 17 00:16:35.893761 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.893834 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] May 17 00:16:35.893927 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 17 00:16:35.894000 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] May 17 00:16:35.894082 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 May 17 00:16:35.894152 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] May 17 00:16:35.894232 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:16:35.894307 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] May 17 00:16:35.894379 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 17 00:16:35.894459 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 17 00:16:35.894544 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 17 00:16:35.894622 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] May 17 00:16:35.894722 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 17 00:16:35.894800 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] May 17 00:16:35.894884 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] May 17 00:16:35.894971 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 17 00:16:35.895048 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] May 17 00:16:35.895127 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 17 00:16:35.895198 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] May 17 00:16:35.895266 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] May 17 00:16:35.895342 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 17 00:16:35.895412 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] May 17 00:16:35.895485 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] May 17 00:16:35.895564 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 17 00:16:35.895636 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] May 17 00:16:35.895771 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] May 17 00:16:35.895847 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] May 17 00:16:35.895970 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:16:35.896049 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 May 17 00:16:35.896117 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 May 17 00:16:35.896187 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:16:35.896253 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:16:35.896319 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 May 17 00:16:35.896388 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:16:35.896462 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 May 17 00:16:35.896540 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:16:35.896621 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:16:35.896796 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 May 17 00:16:35.896887 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:16:35.896961 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 17 00:16:35.897027 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 May 17 00:16:35.897092 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 May 17 00:16:35.897159 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 17 00:16:35.897233 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 May 17 00:16:35.897299 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 May 17 00:16:35.897368 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 17 00:16:35.897435 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 May 17 00:16:35.897510 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 May 17 00:16:35.897590 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 17 00:16:35.897668 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 May 17 00:16:35.897765 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 May 17 00:16:35.897840 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 17 00:16:35.898206 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 May 17 00:16:35.898290 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 May 17 00:16:35.898364 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] May 17 00:16:35.898433 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:16:35.898502 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 00:16:35.898576 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:16:35.898645 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 00:16:35.898811 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:16:35.898909 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] May 17 00:16:35.898983 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:16:35.899058 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] May 17 00:16:35.899128 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:16:35.899206 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] May 17 00:16:35.899280 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:16:35.899351 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] May 17 00:16:35.899422 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:16:35.899495 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] May 17 00:16:35.899567 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:16:35.899637 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] May 17 00:16:35.899987 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:16:35.900087 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] May 17 00:16:35.900156 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] May 17 00:16:35.900224 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] May 17 00:16:35.900292 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 17 00:16:35.900362 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] May 17 00:16:35.900429 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 17 00:16:35.900502 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] May 17 00:16:35.900569 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 17 00:16:35.900637 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] May 17 00:16:35.901297 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 17 00:16:35.901388 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] May 17 00:16:35.901458 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 17 00:16:35.901529 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] May 17 00:16:35.901599 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 17 00:16:35.901689 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] May 17 00:16:35.901768 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 17 00:16:35.901839 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] May 17 00:16:35.901932 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 17 00:16:35.902005 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] May 17 00:16:35.902077 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] May 17 00:16:35.902150 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] May 17 00:16:35.902225 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] May 17 00:16:35.902296 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 17 00:16:35.902372 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] May 17 00:16:35.902440 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 17 00:16:35.902505 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 17 00:16:35.902571 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] May 17 00:16:35.902637 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:16:35.902727 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] May 17 00:16:35.902804 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 17 00:16:35.902883 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 17 00:16:35.902953 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] May 17 00:16:35.903021 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:16:35.903104 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] May 17 00:16:35.903181 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] May 17 00:16:35.903251 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 17 00:16:35.903321 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 17 00:16:35.903390 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] May 17 00:16:35.903458 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:16:35.903534 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] May 17 00:16:35.903605 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 17 00:16:35.903689 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 17 00:16:35.903760 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] May 17 00:16:35.903832 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:16:35.903962 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] May 17 00:16:35.904041 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] May 17 00:16:35.904120 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 17 00:16:35.904189 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 17 00:16:35.904258 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] May 17 00:16:35.904328 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:16:35.904407 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] May 17 00:16:35.904479 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] May 17 00:16:35.904547 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 17 00:16:35.904617 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 17 00:16:35.904715 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] May 17 00:16:35.904794 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:16:35.904888 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] May 17 00:16:35.904976 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] May 17 00:16:35.905054 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] May 17 00:16:35.905124 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 17 00:16:35.905202 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 17 00:16:35.905273 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] May 17 00:16:35.905342 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:16:35.905414 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 17 00:16:35.905484 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 17 00:16:35.905553 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] May 17 00:16:35.905626 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:16:35.907806 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 17 00:16:35.907948 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] May 17 00:16:35.908025 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] May 17 00:16:35.908093 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:16:35.908165 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 17 00:16:35.908226 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 17 00:16:35.908290 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 17 00:16:35.908380 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] May 17 00:16:35.908445 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] May 17 00:16:35.908507 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] May 17 00:16:35.908577 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] May 17 00:16:35.908642 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] May 17 00:16:35.910783 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] May 17 00:16:35.910936 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] May 17 00:16:35.911018 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] May 17 00:16:35.911102 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] May 17 00:16:35.911179 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 00:16:35.911244 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] May 17 00:16:35.911309 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] May 17 00:16:35.911381 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] May 17 00:16:35.911449 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] May 17 00:16:35.911512 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] May 17 00:16:35.911581 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] May 17 00:16:35.911643 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] May 17 00:16:35.911741 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] May 17 00:16:35.911821 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] May 17 00:16:35.911900 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] May 17 00:16:35.911967 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] May 17 00:16:35.912040 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] May 17 00:16:35.912105 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] May 17 00:16:35.912169 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] May 17 00:16:35.912250 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] May 17 00:16:35.912315 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] May 17 00:16:35.912378 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] May 17 00:16:35.912388 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 17 00:16:35.912396 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 17 00:16:35.912404 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 17 00:16:35.912413 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 17 00:16:35.912420 kernel: iommu: Default domain type: Translated May 17 00:16:35.912431 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:16:35.912440 kernel: efivars: Registered efivars operations May 17 00:16:35.912448 kernel: vgaarb: loaded May 17 00:16:35.912456 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:16:35.912464 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:16:35.912473 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:16:35.912481 kernel: pnp: PnP ACPI init May 17 00:16:35.912558 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 17 00:16:35.912572 kernel: pnp: PnP ACPI: found 1 devices May 17 00:16:35.912580 kernel: NET: Registered PF_INET protocol family May 17 00:16:35.912589 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:16:35.912597 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:16:35.912605 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:16:35.912613 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:16:35.912621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:16:35.912629 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:16:35.912638 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:16:35.912648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:16:35.912656 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:16:35.912940 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) May 17 00:16:35.912958 kernel: PCI: CLS 0 bytes, default 64 May 17 00:16:35.912967 kernel: kvm [1]: HYP mode not available May 17 00:16:35.912975 kernel: Initialise system trusted keyrings May 17 00:16:35.912984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:16:35.912992 kernel: Key type asymmetric registered May 17 00:16:35.913000 kernel: Asymmetric key parser 'x509' registered May 17 00:16:35.913012 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 17 00:16:35.913019 kernel: io scheduler mq-deadline registered May 17 00:16:35.913027 kernel: io scheduler kyber registered May 17 00:16:35.913035 kernel: io scheduler bfq registered May 17 00:16:35.913044 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 17 00:16:35.913127 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 May 17 00:16:35.913199 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 May 17 00:16:35.913268 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.913345 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 May 17 00:16:35.913414 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 May 17 00:16:35.913484 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.913557 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 17 00:16:35.913627 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 17 00:16:35.913709 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.913789 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 17 00:16:35.913902 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 17 00:16:35.915804 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.915919 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 17 00:16:35.915996 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 17 00:16:35.916066 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.916149 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 17 00:16:35.916219 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 17 00:16:35.916289 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.916360 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 17 00:16:35.916427 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 17 00:16:35.916494 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.916575 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 17 00:16:35.916646 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 17 00:16:35.916737 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.916751 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 17 00:16:35.916825 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 17 00:16:35.916941 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 17 00:16:35.917023 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:16:35.917034 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:16:35.917043 kernel: ACPI: button: Power Button [PWRB] May 17 00:16:35.917051 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:16:35.917128 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 17 00:16:35.917205 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 17 00:16:35.917217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:16:35.917228 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:16:35.917307 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 17 00:16:35.917318 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 17 00:16:35.917326 kernel: thunder_xcv, ver 1.0 May 17 00:16:35.917334 kernel: thunder_bgx, ver 1.0 May 17 00:16:35.917342 kernel: nicpf, ver 1.0 May 17 00:16:35.917350 kernel: nicvf, ver 
1.0 May 17 00:16:35.917435 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:16:35.917503 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:16:35 UTC (1747440995) May 17 00:16:35.917515 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:16:35.917524 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:16:35.917532 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:16:35.917540 kernel: watchdog: Hard watchdog permanently disabled May 17 00:16:35.917548 kernel: NET: Registered PF_INET6 protocol family May 17 00:16:35.917556 kernel: Segment Routing with IPv6 May 17 00:16:35.917564 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:16:35.917572 kernel: NET: Registered PF_PACKET protocol family May 17 00:16:35.917580 kernel: Key type dns_resolver registered May 17 00:16:35.917590 kernel: registered taskstats version 1 May 17 00:16:35.917598 kernel: Loading compiled-in X.509 certificates May 17 00:16:35.917606 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:16:35.917614 kernel: Key type .fscrypt registered May 17 00:16:35.917621 kernel: Key type fscrypt-provisioning registered May 17 00:16:35.917629 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:16:35.917638 kernel: ima: Allocated hash algorithm: sha1 May 17 00:16:35.917645 kernel: ima: No architecture policies found May 17 00:16:35.917656 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:16:35.917667 kernel: clk: Disabling unused clocks May 17 00:16:35.917687 kernel: Freeing unused kernel memory: 39424K May 17 00:16:35.917696 kernel: Run /init as init process May 17 00:16:35.917704 kernel: with arguments: May 17 00:16:35.917713 kernel: /init May 17 00:16:35.917721 kernel: with environment: May 17 00:16:35.917728 kernel: HOME=/ May 17 00:16:35.917736 kernel: TERM=linux May 17 00:16:35.917744 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:16:35.917759 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:16:35.917771 systemd[1]: Detected virtualization kvm. May 17 00:16:35.917781 systemd[1]: Detected architecture arm64. May 17 00:16:35.917790 systemd[1]: Running in initrd. May 17 00:16:35.917800 systemd[1]: No hostname configured, using default hostname. May 17 00:16:35.917809 systemd[1]: Hostname set to . May 17 00:16:35.917818 systemd[1]: Initializing machine ID from VM UUID. May 17 00:16:35.917828 systemd[1]: Queued start job for default target initrd.target. May 17 00:16:35.917837 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:16:35.917848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:16:35.917868 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:16:35.917877 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:16:35.917885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
May 17 00:16:35.917894 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:16:35.917907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:16:35.917916 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:16:35.917925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:16:35.917934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:16:35.917945 systemd[1]: Reached target paths.target - Path Units. May 17 00:16:35.917954 systemd[1]: Reached target slices.target - Slice Units. May 17 00:16:35.917962 systemd[1]: Reached target swap.target - Swaps. May 17 00:16:35.917971 systemd[1]: Reached target timers.target - Timer Units. May 17 00:16:35.917982 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:16:35.917991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:16:35.917999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:16:35.918008 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:16:35.918017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:16:35.918029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:16:35.918038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:16:35.918047 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:16:35.918055 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:16:35.918066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:16:35.918075 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:16:35.918083 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:16:35.918092 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:16:35.918101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:16:35.918109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:35.918118 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:16:35.918150 systemd-journald[236]: Collecting audit messages is disabled. May 17 00:16:35.918174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:16:35.918182 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:16:35.918194 systemd-journald[236]: Journal started May 17 00:16:35.918214 systemd-journald[236]: Runtime Journal (/run/log/journal/42a3b08451b94bb888a83565ba0d5dc5) is 8.0M, max 76.6M, 68.6M free. May 17 00:16:35.919913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:16:35.920774 systemd-modules-load[237]: Inserted module 'overlay' May 17 00:16:35.922977 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:16:35.934722 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 17 00:16:35.936549 systemd-modules-load[237]: Inserted module 'br_netfilter' May 17 00:16:35.939797 kernel: Bridge firewalling registered May 17 00:16:35.937363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:35.938931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:16:35.946988 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:16:35.948907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:16:35.953107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:16:35.955035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:16:35.960842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:16:35.972768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:16:35.982897 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:16:35.983946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:16:35.985733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:16:35.993103 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:16:35.995907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:16:36.009134 dracut-cmdline[269]: dracut-dracut-053 May 17 00:16:36.013774 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:16:36.036108 systemd-resolved[271]: Positive Trust Anchors: May 17 00:16:36.036126 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:16:36.036158 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:16:36.042170 systemd-resolved[271]: Defaulting to hostname 'linux'. May 17 00:16:36.043335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:16:36.048652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:16:36.123742 kernel: SCSI subsystem initialized May 17 00:16:36.128723 kernel: Loading iSCSI transport class v2.0-870. May 17 00:16:36.138715 kernel: iscsi: registered transport (tcp) May 17 00:16:36.153720 kernel: iscsi: registered transport (qla4xxx) May 17 00:16:36.153813 kernel: QLogic iSCSI HBA Driver May 17 00:16:36.203417 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
May 17 00:16:36.209907 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:16:36.231744 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:16:36.231843 kernel: device-mapper: uevent: version 1.0.3 May 17 00:16:36.231896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:16:36.285774 kernel: raid6: neonx8 gen() 15572 MB/s May 17 00:16:36.302722 kernel: raid6: neonx4 gen() 15483 MB/s May 17 00:16:36.319727 kernel: raid6: neonx2 gen() 13123 MB/s May 17 00:16:36.336729 kernel: raid6: neonx1 gen() 10365 MB/s May 17 00:16:36.353752 kernel: raid6: int64x8 gen() 6919 MB/s May 17 00:16:36.370709 kernel: raid6: int64x4 gen() 7299 MB/s May 17 00:16:36.387732 kernel: raid6: int64x2 gen() 6089 MB/s May 17 00:16:36.404718 kernel: raid6: int64x1 gen() 4989 MB/s May 17 00:16:36.404814 kernel: raid6: using algorithm neonx8 gen() 15572 MB/s May 17 00:16:36.421748 kernel: raid6: .... xor() 11782 MB/s, rmw enabled May 17 00:16:36.421822 kernel: raid6: using neon recovery algorithm May 17 00:16:36.426898 kernel: xor: measuring software checksum speed May 17 00:16:36.426976 kernel: 8regs : 19769 MB/sec May 17 00:16:36.426994 kernel: 32regs : 19655 MB/sec May 17 00:16:36.427011 kernel: arm64_neon : 26954 MB/sec May 17 00:16:36.427723 kernel: xor: using function: arm64_neon (26954 MB/sec) May 17 00:16:36.478725 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:16:36.492339 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:16:36.499017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:16:36.513138 systemd-udevd[454]: Using default interface naming scheme 'v255'. May 17 00:16:36.516652 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:16:36.525833 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:16:36.543439 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation May 17 00:16:36.581547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:16:36.587932 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:16:36.643770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:16:36.655606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:16:36.680816 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:16:36.682667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:16:36.684370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:16:36.685051 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:16:36.693212 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:16:36.714773 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:16:36.765036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 17 00:16:36.769113 kernel: scsi host0: Virtio SCSI HBA May 17 00:16:36.769322 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:16:36.769352 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:16:36.766936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:16:36.771807 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:16:36.774471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:16:36.774550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:36.775816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:36.784993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:36.787765 kernel: ACPI: bus type USB registered May 17 00:16:36.789687 kernel: usbcore: registered new interface driver usbfs May 17 00:16:36.789839 kernel: usbcore: registered new interface driver hub May 17 00:16:36.790040 kernel: usbcore: registered new device driver usb May 17 00:16:36.802181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:36.808067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:16:36.817727 kernel: sr 0:0:0:0: Power-on or device reset occurred May 17 00:16:36.819697 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray May 17 00:16:36.819940 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:16:36.819954 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 May 17 00:16:36.829011 kernel: sd 0:0:0:1: Power-on or device reset occurred May 17 00:16:36.830910 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 17 00:16:36.831099 kernel: sd 0:0:0:1: [sda] Write Protect is off May 17 00:16:36.831188 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 May 17 00:16:36.831280 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:16:36.837698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:16:36.837747 kernel: GPT:17805311 != 80003071 May 17 00:16:36.837759 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:16:36.837769 kernel: GPT:17805311 != 80003071 May 17 00:16:36.837779 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:16:36.839713 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:16:36.840720 kernel: sd 0:0:0:1: [sda] Attached SCSI disk May 17 00:16:36.845423 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:16:36.848721 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:16:36.850701 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 17 00:16:36.850979 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 17 00:16:36.852972 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 17 00:16:36.853197 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 17 00:16:36.853290 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 17 00:16:36.853374 kernel: hub 1-0:1.0: USB hub found May 17 00:16:36.853979 kernel: hub 1-0:1.0: 4 ports detected May 17 00:16:36.855952 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 17 00:16:36.856144 kernel: hub 2-0:1.0: USB hub found May 17 00:16:36.856242 kernel: hub 2-0:1.0: 4 ports detected May 17 00:16:36.895699 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (509) May 17 00:16:36.899707 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (508) May 17 00:16:36.903370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:16:36.911245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:16:36.919933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:16:36.929437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:16:36.931324 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:16:36.939955 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:16:36.968788 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:16:36.969804 disk-uuid[572]: Primary Header is updated. May 17 00:16:36.969804 disk-uuid[572]: Secondary Entries is updated. May 17 00:16:36.969804 disk-uuid[572]: Secondary Header is updated. May 17 00:16:37.102742 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 17 00:16:37.239513 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 17 00:16:37.239649 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 17 00:16:37.240280 kernel: usbcore: registered new interface driver usbhid May 17 00:16:37.240323 kernel: usbhid: USB HID core driver May 17 00:16:37.345763 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 17 00:16:37.473732 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 17 00:16:37.527744 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 17 00:16:37.995730 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:16:37.996774 disk-uuid[574]: The operation has completed successfully. May 17 00:16:38.051005 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:16:38.051129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:16:38.062011 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
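The Found device dev-disk-by\x2dlabel-*.device units above are systemd's view of the symlinks udev maintains under /dev/disk. A small sketch that lists those links on a running Linux system; the directories are the standard udev locations but may be absent on machines without labelled partitions, hence the existence check.

    # Sketch: enumerate the udev-maintained symlinks that back units such as
    # dev-disk-by\x2dlabel-ROOT.device. Requires a Linux system with udev.
    import os

    for d in ("/dev/disk/by-label", "/dev/disk/by-partlabel", "/dev/disk/by-partuuid"):
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            target = os.path.realpath(os.path.join(d, name))
            print(f"{d}/{name} -> {target}")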
May 17 00:16:38.075648 sh[591]: Success May 17 00:16:38.088892 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:16:38.148424 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:16:38.162072 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:16:38.163415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:16:38.180877 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:16:38.180951 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:16:38.180973 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:16:38.180992 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:16:38.181711 kernel: BTRFS info (device dm-0): using free space tree May 17 00:16:38.188896 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:16:38.191746 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:16:38.192470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:16:38.198917 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:16:38.201088 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:16:38.217920 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:16:38.217983 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:16:38.218004 kernel: BTRFS info (device sda6): using free space tree May 17 00:16:38.224704 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:16:38.224802 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:16:38.238640 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:16:38.240008 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:16:38.252167 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:16:38.257025 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:16:38.347824 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:16:38.358381 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:16:38.364782 ignition[695]: Ignition 2.19.0 May 17 00:16:38.365363 ignition[695]: Stage: fetch-offline May 17 00:16:38.365414 ignition[695]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:38.365423 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:38.365725 ignition[695]: parsed url from cmdline: "" May 17 00:16:38.365729 ignition[695]: no config URL provided May 17 00:16:38.365736 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:16:38.365748 ignition[695]: no config at "/usr/lib/ignition/user.ign" May 17 00:16:38.365753 ignition[695]: failed to fetch config: resource requires networking May 17 00:16:38.366135 ignition[695]: Ignition finished successfully May 17 00:16:38.369997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
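verity-setup above activates /dev/mapper/usr, with the kernel choosing the accelerated "sha256-ce" implementation for the hash checks. The sketch below shows only the core idea behind dm-verity (hash fixed-size blocks and reduce them to a single root digest a loader can pin); the real device uses a Merkle tree with its own on-disk format, so this is a simplification, not the actual mechanism.

    # Simplified illustration of the dm-verity idea: hash every 4 KiB block,
    # then derive one root digest. The real format is a Merkle tree with
    # superblock metadata; this shows only the core notion.
    import hashlib

    BLOCK = 4096

    def verity_root(path: str) -> str:
        leaves = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                block = block.ljust(BLOCK, b"\0")   # pad the final block
                leaves.append(hashlib.sha256(block).digest())
        return hashlib.sha256(b"".join(leaves)).hexdigest()

    if __name__ == "__main__":
        print(verity_root("/etc/os-release"))       # any readable file works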
May 17 00:16:38.386930 systemd-networkd[777]: lo: Link UP May 17 00:16:38.386939 systemd-networkd[777]: lo: Gained carrier May 17 00:16:38.388853 systemd-networkd[777]: Enumeration completed May 17 00:16:38.389077 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:16:38.390196 systemd[1]: Reached target network.target - Network. May 17 00:16:38.391619 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:38.391623 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:16:38.392859 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:38.392863 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:16:38.393913 systemd-networkd[777]: eth0: Link UP May 17 00:16:38.393917 systemd-networkd[777]: eth0: Gained carrier May 17 00:16:38.393925 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:38.398275 systemd-networkd[777]: eth1: Link UP May 17 00:16:38.398283 systemd-networkd[777]: eth1: Gained carrier May 17 00:16:38.398299 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:38.403638 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:16:38.427322 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:16:38.429518 ignition[780]: Ignition 2.19.0 May 17 00:16:38.429532 ignition[780]: Stage: fetch May 17 00:16:38.429768 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:38.429784 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:38.429916 ignition[780]: parsed url from cmdline: "" May 17 00:16:38.429920 ignition[780]: no config URL provided May 17 00:16:38.429925 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:16:38.429938 ignition[780]: no config at "/usr/lib/ignition/user.ign" May 17 00:16:38.429963 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 17 00:16:38.430666 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:16:38.452783 systemd-networkd[777]: eth0: DHCPv4 address 138.199.238.255/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:16:38.630904 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 17 00:16:38.637927 ignition[780]: GET result: OK May 17 00:16:38.638088 ignition[780]: parsing config with SHA512: 2d8729384880c8d5d1664e8d8b47759dae4abeecb79ca3276d709a5a0f38e31dc1b2ebb9b2dab4ef2b776e8c930d42ebb0f6c559e08b21f2d21f638f5308a497 May 17 00:16:38.644387 unknown[780]: fetched base config from "system" May 17 00:16:38.644398 unknown[780]: fetched base config from "system" May 17 00:16:38.645210 ignition[780]: fetch: fetch complete May 17 00:16:38.644402 unknown[780]: fetched user config from "hetzner" May 17 00:16:38.645216 ignition[780]: fetch: fetch passed May 17 00:16:38.647148 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
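The fetch stage above fails on attempt #1 while the network is still unreachable, succeeds on attempt #2 once DHCP has configured the interfaces, and then logs a SHA512 of the userdata it received. A hedged re-creation of that loop; the endpoint URL is the one in the log, while the retry count and delay are arbitrary illustration values rather than Ignition's actual policy.

    # Sketch of the behaviour logged above: retry the metadata endpoint until
    # networking is up, then record a SHA512 of whatever config was returned.
    import hashlib, time, urllib.error, urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(retries: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=5) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("could not reach metadata service")

    if __name__ == "__main__":
        data = fetch_userdata()
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())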
May 17 00:16:38.645272 ignition[780]: Ignition finished successfully May 17 00:16:38.659128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:16:38.673042 ignition[787]: Ignition 2.19.0 May 17 00:16:38.673052 ignition[787]: Stage: kargs May 17 00:16:38.673236 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:38.673246 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:38.674270 ignition[787]: kargs: kargs passed May 17 00:16:38.674328 ignition[787]: Ignition finished successfully May 17 00:16:38.676387 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:16:38.682919 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:16:38.696964 ignition[793]: Ignition 2.19.0 May 17 00:16:38.696976 ignition[793]: Stage: disks May 17 00:16:38.697285 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 17 00:16:38.697298 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:38.698499 ignition[793]: disks: disks passed May 17 00:16:38.698560 ignition[793]: Ignition finished successfully May 17 00:16:38.700006 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:16:38.701197 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:16:38.701901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:16:38.702532 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:16:38.703607 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:16:38.704722 systemd[1]: Reached target basic.target - Basic System. May 17 00:16:38.711952 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:16:38.728342 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:16:38.733309 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:16:38.738925 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:16:38.788774 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:16:38.789248 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:16:38.791772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:16:38.801906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:16:38.806497 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:16:38.810254 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:16:38.811311 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:16:38.811361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:16:38.820303 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (810) May 17 00:16:38.820346 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:16:38.820918 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:16:38.821690 kernel: BTRFS info (device sda6): using free space tree May 17 00:16:38.827552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 17 00:16:38.830715 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:16:38.830752 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:16:38.839885 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:16:38.846126 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:16:38.893713 coreos-metadata[812]: May 17 00:16:38.893 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:16:38.896105 coreos-metadata[812]: May 17 00:16:38.895 INFO Fetch successful May 17 00:16:38.898525 coreos-metadata[812]: May 17 00:16:38.897 INFO wrote hostname ci-4081-3-3-n-0eec03f1fd to /sysroot/etc/hostname May 17 00:16:38.899402 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:16:38.903829 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:16:38.907238 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 17 00:16:38.913437 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:16:38.918908 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:16:39.015068 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:16:39.021931 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:16:39.027616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:16:39.031697 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:16:39.059688 ignition[926]: INFO : Ignition 2.19.0 May 17 00:16:39.059688 ignition[926]: INFO : Stage: mount May 17 00:16:39.059688 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:16:39.059688 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:39.062945 ignition[926]: INFO : mount: mount passed May 17 00:16:39.062945 ignition[926]: INFO : Ignition finished successfully May 17 00:16:39.063115 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:16:39.064089 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:16:39.068867 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:16:39.181437 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:16:39.189924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:16:39.204747 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (938) May 17 00:16:39.206737 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:16:39.206863 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:16:39.206886 kernel: BTRFS info (device sda6): using free space tree May 17 00:16:39.211006 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:16:39.211067 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:16:39.214507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
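flatcar-metadata-hostname above fetches the hostname from the metadata service and writes it into the still-initrd-mounted root at /sysroot/etc/hostname. The sketch mirrors that write but targets a scratch path by default so it is safe to try outside the initrd; the metadata URL is the one shown in the log.

    # Sketch of what the "wrote hostname ... to /sysroot/etc/hostname" line
    # describes. Writes to /tmp by default; inside the initrd the real target
    # would be /sysroot/etc/hostname.
    import urllib.request

    METADATA = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    def write_hostname(target: str = "/tmp/hostname.example") -> str:
        with urllib.request.urlopen(METADATA, timeout=5) as resp:
            hostname = resp.read().decode().strip()
        with open(target, "w") as f:
            f.write(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())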
May 17 00:16:39.237975 ignition[955]: INFO : Ignition 2.19.0 May 17 00:16:39.238710 ignition[955]: INFO : Stage: files May 17 00:16:39.239395 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:16:39.240810 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:39.242108 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 17 00:16:39.244027 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:16:39.244027 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:16:39.247608 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:16:39.247608 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:16:39.250109 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:16:39.250109 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:16:39.250109 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 17 00:16:39.247899 unknown[955]: wrote ssh authorized keys file for user: core May 17 00:16:39.358907 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:16:39.507476 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:16:39.507476 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:16:39.509787 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 00:16:39.510787 systemd-networkd[777]: eth1: Gained IPv6LL May 17 00:16:40.085509 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:16:40.160266 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:16:40.168663 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:16:40.279351 systemd-networkd[777]: eth0: Gained IPv6LL May 17 00:16:40.724666 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:16:40.897948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:16:40.897948 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:16:40.900966 ignition[955]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:16:40.900966 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:16:40.900966 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:16:40.911290 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:16:40.911290 ignition[955]: INFO : files: files passed May 17 00:16:40.911290 ignition[955]: INFO : Ignition finished successfully May 17 00:16:40.904689 systemd[1]: Finished ignition-files.service - Ignition (files). 
May 17 00:16:40.913191 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:16:40.916150 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:16:40.919167 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:16:40.919876 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:16:40.931572 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:16:40.931572 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:16:40.934563 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:16:40.937413 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:16:40.938987 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:16:40.944961 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:16:40.996591 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:16:40.996746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:16:40.998625 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:16:40.999357 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:16:41.000758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:16:41.007027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:16:41.021200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:16:41.027979 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:16:41.039222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:16:41.040643 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:16:41.042036 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:16:41.042569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:16:41.042722 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:16:41.044719 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:16:41.046212 systemd[1]: Stopped target basic.target - Basic System. May 17 00:16:41.047565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:16:41.048921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:16:41.049956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:16:41.050968 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:16:41.051902 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:16:41.052942 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:16:41.053882 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:16:41.054718 systemd[1]: Stopped target swap.target - Swaps. May 17 00:16:41.055478 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:16:41.055659 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
May 17 00:16:41.056844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:16:41.057893 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:16:41.058831 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:16:41.059883 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:16:41.060596 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:16:41.060743 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:16:41.062519 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:16:41.062638 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:16:41.063738 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:16:41.063864 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:16:41.064956 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:16:41.065055 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:16:41.084524 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:16:41.091108 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:16:41.092303 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:16:41.092468 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:16:41.094618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:16:41.094762 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:16:41.099766 ignition[1008]: INFO : Ignition 2.19.0 May 17 00:16:41.099766 ignition[1008]: INFO : Stage: umount May 17 00:16:41.099766 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:16:41.099766 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:16:41.102558 ignition[1008]: INFO : umount: umount passed May 17 00:16:41.102558 ignition[1008]: INFO : Ignition finished successfully May 17 00:16:41.101211 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:16:41.101320 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:16:41.107180 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:16:41.107284 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:16:41.112496 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:16:41.112562 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:16:41.113901 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:16:41.113957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:16:41.116089 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:16:41.116144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:16:41.117620 systemd[1]: Stopped target network.target - Network. May 17 00:16:41.118122 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:16:41.118180 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:16:41.122212 systemd[1]: Stopped target paths.target - Path Units. May 17 00:16:41.123648 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
May 17 00:16:41.130252 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:16:41.133781 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:16:41.134320 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:16:41.137244 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:16:41.137357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:16:41.138434 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:16:41.138507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:16:41.142671 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:16:41.142772 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:16:41.143633 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:16:41.143722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:16:41.144578 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:16:41.147184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:16:41.147739 systemd-networkd[777]: eth0: DHCPv6 lease lost May 17 00:16:41.152024 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:16:41.155765 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:16:41.155766 systemd-networkd[777]: eth1: DHCPv6 lease lost May 17 00:16:41.155892 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:16:41.160807 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:16:41.161067 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:16:41.168616 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:16:41.168781 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:16:41.174933 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:16:41.175428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:16:41.175500 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:16:41.178639 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:16:41.180512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:16:41.181205 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:16:41.181255 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:16:41.182015 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:16:41.182076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:16:41.183804 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:16:41.188646 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:16:41.189025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:16:41.195121 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:16:41.195195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:16:41.204958 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:16:41.206726 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 17 00:16:41.208746 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:16:41.208802 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:16:41.209918 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:16:41.209952 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:16:41.210982 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:16:41.211031 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:16:41.212453 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:16:41.212498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:16:41.214019 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:16:41.214067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:16:41.229548 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:16:41.230974 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:16:41.231064 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:16:41.232002 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:16:41.232085 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:16:41.235181 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:16:41.235235 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:16:41.239012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:16:41.239084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:41.240383 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:16:41.240528 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:16:41.242265 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:16:41.242365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:16:41.244557 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:16:41.250994 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:16:41.262491 systemd[1]: Switching root. May 17 00:16:41.299448 systemd-journald[236]: Journal stopped May 17 00:16:42.220610 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). May 17 00:16:42.220708 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:16:42.220722 kernel: SELinux: policy capability open_perms=1 May 17 00:16:42.220731 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:16:42.220741 kernel: SELinux: policy capability always_check_network=0 May 17 00:16:42.220751 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:16:42.220765 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:16:42.220775 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:16:42.220785 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:16:42.220797 kernel: audit: type=1403 audit(1747441001.427:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:16:42.220819 systemd[1]: Successfully loaded SELinux policy in 34.908ms. May 17 00:16:42.220852 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.207ms. 
May 17 00:16:42.220869 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:16:42.220881 systemd[1]: Detected virtualization kvm. May 17 00:16:42.220893 systemd[1]: Detected architecture arm64. May 17 00:16:42.220903 systemd[1]: Detected first boot. May 17 00:16:42.220914 systemd[1]: Hostname set to . May 17 00:16:42.220928 systemd[1]: Initializing machine ID from VM UUID. May 17 00:16:42.220941 zram_generator::config[1051]: No configuration found. May 17 00:16:42.220952 systemd[1]: Populated /etc with preset unit settings. May 17 00:16:42.220962 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:16:42.220973 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:16:42.220983 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:16:42.220995 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:16:42.221006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:16:42.221018 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:16:42.221029 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:16:42.221044 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:16:42.221056 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:16:42.221067 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:16:42.221077 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:16:42.221088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:16:42.221100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:16:42.221111 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:16:42.221125 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:16:42.221136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:16:42.221147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:16:42.221157 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 17 00:16:42.221168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:16:42.221179 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:16:42.221190 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:16:42.221203 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:16:42.221215 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:16:42.221226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:16:42.221237 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:16:42.221248 systemd[1]: Reached target slices.target - Slice Units. 
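"Detected virtualization kvm" followed by "Initializing machine ID from VM UUID" means the first-boot machine ID is seeded from the hypervisor-provided product UUID rather than generated randomly. A minimal sketch of that derivation, under the assumption that the UUID is exposed at /sys/class/dmi/id/product_uuid (root-readable; firmware without DMI publishes it elsewhere, and systemd's own normalization covers more cases than this).

    # Sketch of deriving a machine-id-style value from the VM's product UUID.
    # Path and normalization are assumptions for illustration.
    import pathlib, re

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        raw = pathlib.Path(path).read_text().strip().lower()
        candidate = raw.replace("-", "")
        if not re.fullmatch(r"[0-9a-f]{32}", candidate):
            raise ValueError(f"not a UUID: {raw!r}")
        return candidate

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())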
May 17 00:16:42.221259 systemd[1]: Reached target swap.target - Swaps. May 17 00:16:42.221270 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:16:42.221281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:16:42.221294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:16:42.221306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:16:42.221317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:16:42.221327 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:16:42.221338 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:16:42.221348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:16:42.221359 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:16:42.221371 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:16:42.221387 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:16:42.221400 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:16:42.221411 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:16:42.221421 systemd[1]: Reached target machines.target - Containers. May 17 00:16:42.221432 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:16:42.221443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:16:42.221456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:16:42.221467 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:16:42.221479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:16:42.221489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:16:42.221500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:16:42.221511 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:16:42.221522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:16:42.221533 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:16:42.221547 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:16:42.221561 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:16:42.221574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:16:42.221586 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:16:42.221598 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:16:42.221609 kernel: fuse: init (API version 7.39) May 17 00:16:42.221620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:16:42.221631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:16:42.221642 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
May 17 00:16:42.221653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:16:42.221666 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:16:42.222175 systemd[1]: Stopped verity-setup.service. May 17 00:16:42.222196 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:16:42.222214 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:16:42.222226 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:16:42.222237 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:16:42.222248 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:16:42.222259 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:16:42.222274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:16:42.222285 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:16:42.222295 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:16:42.222306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:16:42.222316 kernel: loop: module loaded May 17 00:16:42.222329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:16:42.222340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:16:42.222351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:16:42.222364 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:16:42.222375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:16:42.222386 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:16:42.222399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:16:42.222409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:16:42.222421 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:16:42.222432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:16:42.222443 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:16:42.222481 systemd-journald[1121]: Collecting audit messages is disabled. May 17 00:16:42.222504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:16:42.222517 systemd-journald[1121]: Journal started May 17 00:16:42.222539 systemd-journald[1121]: Runtime Journal (/run/log/journal/42a3b08451b94bb888a83565ba0d5dc5) is 8.0M, max 76.6M, 68.6M free. May 17 00:16:42.232906 kernel: ACPI: bus type drm_connector registered May 17 00:16:42.232987 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:16:41.934389 systemd[1]: Queued start job for default target multi-user.target. May 17 00:16:41.955767 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:16:41.956239 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:16:42.238363 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:16:42.238424 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:16:42.238440 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
May 17 00:16:42.245730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:16:42.255168 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:16:42.255249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:16:42.260758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:16:42.264960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:16:42.272345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:16:42.273695 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:16:42.286798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:16:42.286887 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:16:42.297233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:16:42.301070 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:16:42.305725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:16:42.307264 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:16:42.307404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:16:42.309104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:16:42.310722 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:16:42.312143 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:16:42.314726 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:16:42.334305 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:16:42.341017 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:16:42.345938 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:16:42.349973 kernel: loop0: detected capacity change from 0 to 203944 May 17 00:16:42.372298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:16:42.376849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:16:42.379084 systemd-journald[1121]: Time spent on flushing to /var/log/journal/42a3b08451b94bb888a83565ba0d5dc5 is 39.739ms for 1138 entries. May 17 00:16:42.379084 systemd-journald[1121]: System Journal (/var/log/journal/42a3b08451b94bb888a83565ba0d5dc5) is 8.0M, max 584.8M, 576.8M free. May 17 00:16:42.439779 systemd-journald[1121]: Received client request to flush runtime journal. May 17 00:16:42.439952 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:16:42.395972 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:16:42.406901 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:16:42.410433 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:16:42.427118 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. 
May 17 00:16:42.427133 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. May 17 00:16:42.439361 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:16:42.443405 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:16:42.446750 kernel: loop1: detected capacity change from 0 to 8 May 17 00:16:42.448273 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:16:42.457012 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:16:42.470737 kernel: loop2: detected capacity change from 0 to 114432 May 17 00:16:42.505200 kernel: loop3: detected capacity change from 0 to 114328 May 17 00:16:42.523789 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:16:42.536750 kernel: loop4: detected capacity change from 0 to 203944 May 17 00:16:42.533547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:16:42.554365 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 17 00:16:42.554389 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 17 00:16:42.559227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:16:42.567891 kernel: loop5: detected capacity change from 0 to 8 May 17 00:16:42.570777 kernel: loop6: detected capacity change from 0 to 114432 May 17 00:16:42.584708 kernel: loop7: detected capacity change from 0 to 114328 May 17 00:16:42.606742 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:16:42.607562 (sd-merge)[1191]: Merged extensions into '/usr'. May 17 00:16:42.613374 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:16:42.613822 systemd[1]: Reloading... May 17 00:16:42.706741 zram_generator::config[1218]: No configuration found. May 17 00:16:42.872448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:16:42.911955 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:16:42.922474 systemd[1]: Reloading finished in 307 ms. May 17 00:16:42.945762 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:16:42.950760 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:16:42.960019 systemd[1]: Starting ensure-sysext.service... May 17 00:16:42.966996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:16:42.991965 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... May 17 00:16:42.991987 systemd[1]: Reloading... May 17 00:16:43.006830 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:16:43.008131 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:16:43.014138 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
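The (sd-merge) lines above are systemd-sysext stacking the four extension images' /usr trees on top of the base /usr with a read-only overlay mount (the loop0..loop7 capacity messages are those images being attached). The sketch below only assembles an overlayfs option string to show the shape of that stacking; the mount-point paths are illustrative, not the ones systemd actually uses, and the mount command is printed rather than executed.

    # Shape of the overlay that sysext builds over /usr: each extension's /usr
    # becomes a read-only lower layer above the host /usr.
    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]

    # In overlayfs the leftmost lowerdir is the topmost layer, so put the
    # extensions ahead of the host /usr.
    lowerdirs = [f"/run/example-sysext/{name}/usr" for name in extensions] + ["/usr"]
    options = "lowerdir=" + ":".join(lowerdirs)

    print("mount -t overlay overlay -o", options, "/usr")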
May 17 00:16:43.014574 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. May 17 00:16:43.017153 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. May 17 00:16:43.021203 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:16:43.021351 systemd-tmpfiles[1260]: Skipping /boot May 17 00:16:43.039473 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:16:43.039610 systemd-tmpfiles[1260]: Skipping /boot May 17 00:16:43.086795 zram_generator::config[1288]: No configuration found. May 17 00:16:43.213261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:16:43.259993 systemd[1]: Reloading finished in 267 ms. May 17 00:16:43.282355 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:16:43.285003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:16:43.306022 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:16:43.311763 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:16:43.314353 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:16:43.327042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:16:43.332664 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:16:43.344881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:16:43.353352 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:16:43.356838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:16:43.362076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:16:43.368036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:16:43.373982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:16:43.374799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:16:43.380397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:16:43.393715 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:16:43.397471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:16:43.397647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:16:43.403091 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:16:43.405631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:16:43.407114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:16:43.415326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:16:43.428099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 17 00:16:43.432669 systemd-udevd[1337]: Using default interface naming scheme 'v255'. May 17 00:16:43.433032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:16:43.433864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:16:43.434604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:16:43.435605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:16:43.439308 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:16:43.439902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:16:43.445096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:16:43.449816 systemd[1]: Finished ensure-sysext.service. May 17 00:16:43.461073 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:16:43.463097 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:16:43.466436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:16:43.467780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:16:43.471766 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:16:43.474271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:16:43.474331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:16:43.478442 augenrules[1362]: No rules May 17 00:16:43.482168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:16:43.485980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:16:43.496045 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:16:43.497088 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:16:43.497751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:16:43.504656 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:16:43.648833 systemd-networkd[1375]: lo: Link UP May 17 00:16:43.648847 systemd-networkd[1375]: lo: Gained carrier May 17 00:16:43.654015 systemd-networkd[1375]: Enumeration completed May 17 00:16:43.654663 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:16:43.658606 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.658617 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:16:43.663873 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.663885 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 00:16:43.666203 systemd-networkd[1375]: eth0: Link UP May 17 00:16:43.666217 systemd-networkd[1375]: eth0: Gained carrier May 17 00:16:43.666239 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.667002 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:16:43.673708 systemd-networkd[1375]: eth1: Link UP May 17 00:16:43.673734 systemd-networkd[1375]: eth1: Gained carrier May 17 00:16:43.673758 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.676432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:16:43.677794 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:16:43.683033 systemd-resolved[1330]: Positive Trust Anchors: May 17 00:16:43.683054 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:16:43.683085 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:16:43.690379 systemd-resolved[1330]: Using system hostname 'ci-4081-3-3-n-0eec03f1fd'. May 17 00:16:43.696639 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:16:43.697513 systemd[1]: Reached target network.target - Network. May 17 00:16:43.699868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:16:43.700766 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 17 00:16:43.727138 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:16:43.728651 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. May 17 00:16:43.730533 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.730624 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:16:43.747761 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1391) May 17 00:16:43.752116 systemd-networkd[1375]: eth0: DHCPv4 address 138.199.238.255/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:16:43.753288 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. May 17 00:16:43.809086 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:16:43.809217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
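Both interfaces above obtain DHCPv4 leases (10.0.0.3/32 via 10.0.0.1 on eth1, 138.199.238.255/32 via 172.31.1.1 on eth0). A small Python sketch that extracts those leases from journal text in the exact message wording systemd-networkd uses in this capture:

    # Sketch: pull DHCPv4 leases out of journal lines like the two above. The
    # regex targets this capture's wording; other versions may phrase it
    # differently. Feed saved journal/console text on stdin.
    import re
    import sys

    LEASE_RE = re.compile(
        r"systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address (?P<addr>\S+), "
        r"gateway (?P<gw>\S+) acquired from (?P<server>\S+)"
    )

    def dhcp_leases(journal_text: str):
        return [m.groupdict() for m in LEASE_RE.finditer(journal_text)]

    if __name__ == "__main__":
        for lease in dhcp_leases(sys.stdin.read()):
            print(lease)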
May 17 00:16:43.814705 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:16:43.821424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:16:43.825053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:16:43.829257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:16:43.830909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:16:43.830957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:16:43.831318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:16:43.832829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:16:43.841539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:16:43.842977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:16:43.849051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:16:43.855958 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:16:43.858850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:16:43.862228 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:16:43.862392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:16:43.863343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:16:43.878755 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:16:43.878877 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:16:43.878895 kernel: [drm] features: -context_init May 17 00:16:43.882705 kernel: [drm] number of scanouts: 1 May 17 00:16:43.882779 kernel: [drm] number of cap sets: 0 May 17 00:16:43.883068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:16:43.884704 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:16:43.898167 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:16:43.919709 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:16:43.936964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:43.945303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:16:43.945487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:43.952015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:16:44.021717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:16:44.096350 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:16:44.102130 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:16:44.131807 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:16:44.164371 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:16:44.166719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:16:44.167634 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:16:44.168668 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:16:44.170459 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:16:44.172121 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:16:44.172859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:16:44.173509 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:16:44.174249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:16:44.174295 systemd[1]: Reached target paths.target - Path Units. May 17 00:16:44.174793 systemd[1]: Reached target timers.target - Timer Units. May 17 00:16:44.177205 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:16:44.180778 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:16:44.185718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:16:44.188371 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:16:44.189777 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:16:44.190446 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:16:44.191026 systemd[1]: Reached target basic.target - Basic System. May 17 00:16:44.191587 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:16:44.191609 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:16:44.203880 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:16:44.208745 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:16:44.219881 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:16:44.222970 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:16:44.227939 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:16:44.232653 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:16:44.233663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:16:44.237520 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:16:44.244131 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:16:44.247929 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:16:44.253994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:16:44.258968 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:16:44.265708 jq[1452]: false May 17 00:16:44.273005 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 17 00:16:44.275530 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:16:44.276081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:16:44.284879 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:16:44.298866 extend-filesystems[1453]: Found loop4 May 17 00:16:44.298866 extend-filesystems[1453]: Found loop5 May 17 00:16:44.298866 extend-filesystems[1453]: Found loop6 May 17 00:16:44.298866 extend-filesystems[1453]: Found loop7 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda May 17 00:16:44.298866 extend-filesystems[1453]: Found sda1 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda2 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda3 May 17 00:16:44.298866 extend-filesystems[1453]: Found usr May 17 00:16:44.298866 extend-filesystems[1453]: Found sda4 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda6 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda7 May 17 00:16:44.298866 extend-filesystems[1453]: Found sda9 May 17 00:16:44.298866 extend-filesystems[1453]: Checking size of /dev/sda9 May 17 00:16:44.291848 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:16:44.367016 coreos-metadata[1448]: May 17 00:16:44.352 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:16:44.367016 coreos-metadata[1448]: May 17 00:16:44.352 INFO Fetch successful May 17 00:16:44.367016 coreos-metadata[1448]: May 17 00:16:44.352 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:16:44.367016 coreos-metadata[1448]: May 17 00:16:44.352 INFO Fetch successful May 17 00:16:44.388612 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 17 00:16:44.388640 extend-filesystems[1453]: Resized partition /dev/sda9 May 17 00:16:44.326938 dbus-daemon[1449]: [system] SELinux support is enabled May 17 00:16:44.295628 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:16:44.392340 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) May 17 00:16:44.300202 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:16:44.300790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:16:44.304932 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:16:44.305136 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:16:44.410455 tar[1470]: linux-arm64/helm May 17 00:16:44.327173 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:16:44.410751 jq[1462]: true May 17 00:16:44.336948 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:16:44.336973 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:16:44.347479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
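coreos-metadata above pulls instance data from Hetzner's link-local metadata service. A tiny Python sketch of the same two fetches; the URLs are the ones logged and only resolve from inside a Hetzner server:

    # Sketch: repeat the metadata fetches coreos-metadata logs above. The
    # endpoints are taken verbatim from the log; 169.254.169.254 is link-local,
    # so this only works on the instance itself.
    import urllib.request

    BASE = "http://169.254.169.254/hetzner/v1"

    def fetch(path: str) -> str:
        with urllib.request.urlopen(f"{BASE}/{path}", timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        print(fetch("metadata"))
        print(fetch("metadata/private-networks"))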
May 17 00:16:44.347505 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:16:44.396095 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:16:44.406476 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:16:44.406652 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:16:44.442950 jq[1489]: true May 17 00:16:44.448790 update_engine[1461]: I20250517 00:16:44.446197 1461 main.cc:92] Flatcar Update Engine starting May 17 00:16:44.461711 systemd[1]: Started update-engine.service - Update Engine. May 17 00:16:44.465222 update_engine[1461]: I20250517 00:16:44.464973 1461 update_check_scheduler.cc:74] Next update check in 9m26s May 17 00:16:44.467955 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:16:44.516364 systemd-logind[1460]: New seat seat0. May 17 00:16:44.537154 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 17 00:16:44.537223 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1380) May 17 00:16:44.537517 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:16:44.542147 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:16:44.542147 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5 May 17 00:16:44.542147 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 17 00:16:44.537533 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 17 00:16:44.549884 extend-filesystems[1453]: Resized filesystem in /dev/sda9 May 17 00:16:44.549884 extend-filesystems[1453]: Found sr0 May 17 00:16:44.537742 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:16:44.543387 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:16:44.543577 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:16:44.593136 bash[1520]: Updated "/home/core/.ssh/authorized_keys" May 17 00:16:44.593424 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:16:44.596322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:16:44.597093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:16:44.608006 systemd[1]: Starting sshkeys.service... May 17 00:16:44.631371 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:16:44.645237 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:16:44.668734 coreos-metadata[1528]: May 17 00:16:44.668 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 17 00:16:44.675391 coreos-metadata[1528]: May 17 00:16:44.675 INFO Fetch successful May 17 00:16:44.681612 unknown[1528]: wrote ssh authorized keys file for user: core May 17 00:16:44.725722 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" May 17 00:16:44.720649 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:16:44.728813 systemd[1]: Finished sshkeys.service. 
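For scale, the resize reported above (1617920 to 9393147 blocks of 4 KiB, per the EXT4 and resize2fs messages) grows the root filesystem from roughly 6.2 GiB to roughly 35.8 GiB:

    # The block counts and 4 KiB block size are taken from the resize2fs /
    # extend-filesystems messages above.
    BLOCK_SIZE = 4096
    OLD_BLOCKS = 1_617_920
    NEW_BLOCKS = 9_393_147

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    if __name__ == "__main__":
        print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")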
May 17 00:16:44.763882 containerd[1474]: time="2025-05-17T00:16:44.763752480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:16:44.781438 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:16:44.821754 containerd[1474]: time="2025-05-17T00:16:44.817980120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.822830 systemd-networkd[1375]: eth1: Gained IPv6LL May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.822663720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.822888160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.822929880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.823120400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.823148800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.823228080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:16:44.823300 containerd[1474]: time="2025-05-17T00:16:44.823245240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.823523 containerd[1474]: time="2025-05-17T00:16:44.823450480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:16:44.823523 containerd[1474]: time="2025-05-17T00:16:44.823483480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.823523 containerd[1474]: time="2025-05-17T00:16:44.823506640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:16:44.823594 containerd[1474]: time="2025-05-17T00:16:44.823523840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.823630 containerd[1474]: time="2025-05-17T00:16:44.823608200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.823909 containerd[1474]: time="2025-05-17T00:16:44.823878960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:16:44.824294 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. 
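Several snapshotter plugins above are skipped because their prerequisites are missing; the btrfs one, for instance, requires /var/lib/containerd to sit on a btrfs filesystem, and it is ext4 here. A small Python sketch of that check, using findmnt from util-linux:

    # Sketch of the filesystem check behind the btrfs snapshotter "skip plugin"
    # message above: the snapshotter directory must live on btrfs.
    import subprocess

    def fstype(path: str) -> str:
        return subprocess.run(
            ["findmnt", "-n", "-o", "FSTYPE", "--target", path],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    if __name__ == "__main__":
        t = fstype("/var/lib/containerd")
        print(f"/var/lib/containerd is on {t}; btrfs snapshotter usable: {t == 'btrfs'}")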
May 17 00:16:44.824652 containerd[1474]: time="2025-05-17T00:16:44.824612480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:16:44.824652 containerd[1474]: time="2025-05-17T00:16:44.824646840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:16:44.824912 containerd[1474]: time="2025-05-17T00:16:44.824852840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:16:44.824945 containerd[1474]: time="2025-05-17T00:16:44.824927280Z" level=info msg="metadata content store policy set" policy=shared May 17 00:16:44.830405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:16:44.833156 containerd[1474]: time="2025-05-17T00:16:44.832767800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:16:44.833156 containerd[1474]: time="2025-05-17T00:16:44.832877480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:16:44.833156 containerd[1474]: time="2025-05-17T00:16:44.832896960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:16:44.833156 containerd[1474]: time="2025-05-17T00:16:44.832912960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:16:44.833156 containerd[1474]: time="2025-05-17T00:16:44.832929400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.833877720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834173920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834282080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834298840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834315840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834328880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834342240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834354760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834368360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834383360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834395640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834408000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834420560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:16:44.835928 containerd[1474]: time="2025-05-17T00:16:44.834441160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:16:44.833968 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834456440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834469120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834483840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834495960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834509120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834522120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834538000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834550800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834568920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834581520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834592720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834605160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834624840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834645280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 17 00:16:44.836283 containerd[1474]: time="2025-05-17T00:16:44.834656880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:16:44.836528 containerd[1474]: time="2025-05-17T00:16:44.834666920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838404760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838455600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838469680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838484080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838494600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838512040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838523560Z" level=info msg="NRI interface is disabled by configuration." May 17 00:16:44.840713 containerd[1474]: time="2025-05-17T00:16:44.838533960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.841264800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.841355400Z" level=info msg="Connect containerd service" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.841403240Z" level=info msg="using legacy CRI server" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.841411360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.841509920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.842302840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:16:44.848157 
containerd[1474]: time="2025-05-17T00:16:44.843388600Z" level=info msg="Start subscribing containerd event" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.843454720Z" level=info msg="Start recovering state" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.843531680Z" level=info msg="Start event monitor" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.843544880Z" level=info msg="Start snapshots syncer" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.843555080Z" level=info msg="Start cni network conf syncer for default" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.843562760Z" level=info msg="Start streaming server" May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.844096280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.844153080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:16:44.848157 containerd[1474]: time="2025-05-17T00:16:44.844200680Z" level=info msg="containerd successfully booted in 0.081298s" May 17 00:16:44.849948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:16:44.852980 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:16:44.854162 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:16:44.889862 systemd-networkd[1375]: eth0: Gained IPv6LL May 17 00:16:44.890312 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. May 17 00:16:44.911264 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:16:45.280965 tar[1470]: linux-arm64/LICENSE May 17 00:16:45.280965 tar[1470]: linux-arm64/README.md May 17 00:16:45.306470 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:16:45.681881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:16:45.692739 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:16:46.221369 kubelet[1562]: E0517 00:16:46.221310 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:16:46.224875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:16:46.225117 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:16:46.761441 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:16:46.783365 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:16:46.804327 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:16:46.812426 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:16:46.812721 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:16:46.821491 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:16:46.842137 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:16:46.849132 systemd[1]: Started getty@tty1.service - Getty on tty1. 
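Two of the errors in this stretch are expected on a node that has not been provisioned yet: containerd finds no CNI network config in /etc/cni/net.d, and kubelet exits because /var/lib/kubelet/config.yaml does not exist, so systemd keeps restarting it (the climbing restart counter below). A small Python pre-flight sketch for those two paths:

    # Sketch: check for the two files whose absence is logged above. Both are
    # normally created later by provisioning (e.g. kubeadm), not by the OS image.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")
    CNI_CONF_DIR = Path("/etc/cni/net.d")

    def report() -> None:
        print(f"kubelet config {KUBELET_CONFIG}: "
              f"{'present' if KUBELET_CONFIG.is_file() else 'missing'}")
        has_cni = CNI_CONF_DIR.is_dir() and any(CNI_CONF_DIR.iterdir())
        print(f"CNI config in {CNI_CONF_DIR}: {'present' if has_cni else 'missing'}")

    if __name__ == "__main__":
        report()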
May 17 00:16:46.856329 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 00:16:46.858435 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:16:46.859912 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:16:46.865402 systemd[1]: Startup finished in 790ms (kernel) + 5.731s (initrd) + 5.472s (userspace) = 11.994s. May 17 00:16:56.475662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:16:56.481062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:16:56.625086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:16:56.628269 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:16:56.678340 kubelet[1598]: E0517 00:16:56.678279 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:16:56.680898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:16:56.681042 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:06.931928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:17:06.939104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:07.064254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:07.071013 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:07.116408 kubelet[1613]: E0517 00:17:07.116339 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:07.120231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:07.120607 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:15.703065 systemd-resolved[1330]: Clock change detected. Flushing caches. May 17 00:17:15.703233 systemd-timesyncd[1361]: Contacted time server 141.98.136.83:123 (2.flatcar.pool.ntp.org). May 17 00:17:15.703315 systemd-timesyncd[1361]: Initial clock synchronization to Sat 2025-05-17 00:17:15.702971 UTC. May 17 00:17:17.822190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:17:17.830790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:17.954981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
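The startup summary above adds up as expected once rounding is taken into account: each phase is printed to the millisecond, so the sum can be off by about a millisecond from the reported 11.994 s.

    # Phase durations copied from the "Startup finished" line above.
    PHASES = {"kernel": 0.790, "initrd": 5.731, "userspace": 5.472}

    if __name__ == "__main__":
        print(f"sum of phases: {sum(PHASES.values()):.3f} s (reported: 11.994 s)")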
May 17 00:17:17.968507 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:18.015498 kubelet[1628]: E0517 00:17:18.015286 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:18.018596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:18.018786 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:28.269906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:17:28.277869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:28.404576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:28.416420 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:28.474313 kubelet[1644]: E0517 00:17:28.474246 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:28.478702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:28.478964 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:30.330425 update_engine[1461]: I20250517 00:17:30.329576 1461 update_attempter.cc:509] Updating boot flags... May 17 00:17:30.376492 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1660) May 17 00:17:30.443870 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1663) May 17 00:17:38.634960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:17:38.641903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:38.770624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:38.780117 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:38.827220 kubelet[1677]: E0517 00:17:38.827148 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:38.829954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:38.830111 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:48.885179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:17:48.894236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:49.048748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:17:49.048935 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:49.096378 kubelet[1692]: E0517 00:17:49.096313 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:49.099409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:49.099670 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:17:59.135421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 17 00:17:59.143832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:17:59.271094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:17:59.288238 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:17:59.329766 kubelet[1707]: E0517 00:17:59.329688 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:17:59.333584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:17:59.333810 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:18:09.385374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 17 00:18:09.391887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:18:09.518167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:18:09.524535 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:18:09.568204 kubelet[1721]: E0517 00:18:09.568147 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:18:09.571862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:18:09.572223 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:18:19.635255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:18:19.650797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:18:19.833084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
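The kubelet failure repeats on a fixed schedule; each cycle is announced by a "Scheduled restart job, restart counter is at N" line. A Python sketch that measures the gap between those announcements, assuming the timestamp-first line format used in this capture, fed on stdin (note that the clock step by systemd-timesyncd around 00:17:15 skews one of the gaps):

    # Sketch: compute the interval between kubelet restart announcements in text
    # formatted like this log (month day HH:MM:SS.micro first, no hostname).
    import re
    import sys
    from datetime import datetime

    RESTART_RE = re.compile(
        r"(?P<ts>\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: "
        r"kubelet\.service: Scheduled restart job, restart counter is at (?P<n>\d+)"
    )

    def restarts(text: str):
        return [(int(m["n"]), datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f"))
                for m in RESTART_RE.finditer(text)]

    if __name__ == "__main__":
        events = restarts(sys.stdin.read())
        for (n1, t1), (n2, t2) in zip(events, events[1:]):
            print(f"restart {n1} -> {n2}: {(t2 - t1).total_seconds():.1f} s apart")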
May 17 00:18:19.839093 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:18:19.885411 kubelet[1735]: E0517 00:18:19.885096 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:18:19.887963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:18:19.888224 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:18:29.787899 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:18:29.789450 systemd[1]: Started sshd@0-138.199.238.255:22-139.178.68.195:40466.service - OpenSSH per-connection server daemon (139.178.68.195:40466). May 17 00:18:30.134786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:18:30.142261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:18:30.297670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:18:30.309450 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:18:30.352556 kubelet[1754]: E0517 00:18:30.352394 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:18:30.355431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:18:30.355667 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:18:30.787372 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 40466 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:30.791714 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:30.804646 systemd-logind[1460]: New session 1 of user core. May 17 00:18:30.807736 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:18:30.814821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:18:30.831623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:18:30.838885 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:18:30.842708 (systemd)[1763]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:18:30.951426 systemd[1763]: Queued start job for default target default.target. May 17 00:18:30.960246 systemd[1763]: Created slice app.slice - User Application Slice. May 17 00:18:30.960293 systemd[1763]: Reached target paths.target - Paths. May 17 00:18:30.960311 systemd[1763]: Reached target timers.target - Timers. May 17 00:18:30.962571 systemd[1763]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:18:30.977735 systemd[1763]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:18:30.977874 systemd[1763]: Reached target sockets.target - Sockets. 
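sshd above accepts a publickey login for 'core' and reports the key's SHA256 fingerprint. A short Python sketch that checks such a fingerprint against the authorized_keys file update-ssh-keys wrote earlier, using ssh-keygen's fingerprint listing:

    # Sketch: look for the fingerprint sshd logged above among core's authorized
    # keys. "ssh-keygen -lf" prints one "<bits> SHA256:<hash> <comment> (<type>)"
    # line per key in the file.
    import subprocess

    LOGGED = "SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI"
    AUTHORIZED_KEYS = "/home/core/.ssh/authorized_keys"

    def fingerprints(path: str):
        out = subprocess.run(
            ["ssh-keygen", "-lf", path],
            check=True, capture_output=True, text=True,
        ).stdout
        return [line.split()[1] for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        print(LOGGED in fingerprints(AUTHORIZED_KEYS))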
May 17 00:18:30.977890 systemd[1763]: Reached target basic.target - Basic System. May 17 00:18:30.977957 systemd[1763]: Reached target default.target - Main User Target. May 17 00:18:30.977990 systemd[1763]: Startup finished in 128ms. May 17 00:18:30.978303 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:18:30.988998 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:18:31.689569 systemd[1]: Started sshd@1-138.199.238.255:22-139.178.68.195:40470.service - OpenSSH per-connection server daemon (139.178.68.195:40470). May 17 00:18:32.705990 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 40470 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:32.708337 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:32.715584 systemd-logind[1460]: New session 2 of user core. May 17 00:18:32.718288 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:18:33.402130 sshd[1774]: pam_unix(sshd:session): session closed for user core May 17 00:18:33.408139 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. May 17 00:18:33.408653 systemd[1]: sshd@1-138.199.238.255:22-139.178.68.195:40470.service: Deactivated successfully. May 17 00:18:33.410345 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:18:33.412308 systemd-logind[1460]: Removed session 2. May 17 00:18:33.583019 systemd[1]: Started sshd@2-138.199.238.255:22-139.178.68.195:40486.service - OpenSSH per-connection server daemon (139.178.68.195:40486). May 17 00:18:34.569966 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 40486 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:34.573337 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:34.579535 systemd-logind[1460]: New session 3 of user core. May 17 00:18:34.587991 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:18:35.246623 sshd[1781]: pam_unix(sshd:session): session closed for user core May 17 00:18:35.251599 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. May 17 00:18:35.252751 systemd[1]: sshd@2-138.199.238.255:22-139.178.68.195:40486.service: Deactivated successfully. May 17 00:18:35.254919 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:18:35.256297 systemd-logind[1460]: Removed session 3. May 17 00:18:35.428270 systemd[1]: Started sshd@3-138.199.238.255:22-139.178.68.195:57550.service - OpenSSH per-connection server daemon (139.178.68.195:57550). May 17 00:18:36.404908 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 57550 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:36.407559 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:36.413967 systemd-logind[1460]: New session 4 of user core. May 17 00:18:36.428728 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:18:37.090858 sshd[1788]: pam_unix(sshd:session): session closed for user core May 17 00:18:37.095953 systemd[1]: sshd@3-138.199.238.255:22-139.178.68.195:57550.service: Deactivated successfully. May 17 00:18:37.097725 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:18:37.099613 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. May 17 00:18:37.100781 systemd-logind[1460]: Removed session 4. 
May 17 00:18:37.268900 systemd[1]: Started sshd@4-138.199.238.255:22-139.178.68.195:57566.service - OpenSSH per-connection server daemon (139.178.68.195:57566). May 17 00:18:38.266125 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 57566 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:38.269088 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:38.275732 systemd-logind[1460]: New session 5 of user core. May 17 00:18:38.283190 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:18:38.810783 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:18:38.811119 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:18:38.829780 sudo[1798]: pam_unix(sudo:session): session closed for user root May 17 00:18:38.992393 sshd[1795]: pam_unix(sshd:session): session closed for user core May 17 00:18:38.997087 systemd[1]: sshd@4-138.199.238.255:22-139.178.68.195:57566.service: Deactivated successfully. May 17 00:18:38.999030 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:18:39.001104 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. May 17 00:18:39.003144 systemd-logind[1460]: Removed session 5. May 17 00:18:39.164189 systemd[1]: Started sshd@5-138.199.238.255:22-139.178.68.195:57572.service - OpenSSH per-connection server daemon (139.178.68.195:57572). May 17 00:18:40.141882 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 57572 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:40.144412 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:40.151646 systemd-logind[1460]: New session 6 of user core. May 17 00:18:40.155807 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:18:40.385121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:18:40.394883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:18:40.546784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:18:40.547077 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:18:40.593202 kubelet[1814]: E0517 00:18:40.593156 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:18:40.595368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:18:40.595594 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
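The kubelet.service entries above keep repeating the same cycle: the unit starts, run.go exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet on this freshly booted node, and systemd schedules another attempt, bumping the restart counter from 10 to 11 roughly ten seconds later. A minimal stdlib-only sketch for confirming that cadence from a capture of this log; the file name is hypothetical, and the year is not part of the journal prefix, so it is filled in from the ISO timestamps containerd prints elsewhere in the same capture:

import re
from datetime import datetime

LOG = open("boot.log", encoding="utf-8", errors="replace").read()  # hypothetical capture of this console log

# Matches e.g. "May 17 00:18:30.134786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10."
pat = re.compile(
    r"(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: kubelet\.service: "
    r"Scheduled restart job, restart counter is at (\d+)\."
)

attempts = [
    (datetime.strptime("2025 " + ts, "%Y %b %d %H:%M:%S.%f"), int(counter))  # year assumed, see note above
    for ts, counter in pat.findall(LOG)
]
for (prev_t, prev_n), (cur_t, cur_n) in zip(attempts, attempts[1:]):
    print(f"restart {prev_n} -> {cur_n}: {(cur_t - prev_t).total_seconds():.1f}s apart")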
May 17 00:18:40.667192 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:18:40.667602 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:18:40.672625 sudo[1822]: pam_unix(sudo:session): session closed for user root May 17 00:18:40.680198 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:18:40.680537 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:18:40.701934 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:18:40.704686 auditctl[1825]: No rules May 17 00:18:40.705085 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:18:40.705258 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:18:40.708950 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:18:40.752333 augenrules[1843]: No rules May 17 00:18:40.753966 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:18:40.755674 sudo[1821]: pam_unix(sudo:session): session closed for user root May 17 00:18:40.916021 sshd[1803]: pam_unix(sshd:session): session closed for user core May 17 00:18:40.919832 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. May 17 00:18:40.921601 systemd[1]: sshd@5-138.199.238.255:22-139.178.68.195:57572.service: Deactivated successfully. May 17 00:18:40.923963 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:18:40.925289 systemd-logind[1460]: Removed session 6. May 17 00:18:41.095817 systemd[1]: Started sshd@6-138.199.238.255:22-139.178.68.195:57576.service - OpenSSH per-connection server daemon (139.178.68.195:57576). May 17 00:18:42.110937 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 57576 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:42.113180 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:42.118000 systemd-logind[1460]: New session 7 of user core. May 17 00:18:42.124796 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:18:42.645526 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:18:42.645821 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:18:42.973050 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:18:42.973053 (dockerd)[1870]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:18:43.227999 dockerd[1870]: time="2025-05-17T00:18:43.227241191Z" level=info msg="Starting up" May 17 00:18:43.335317 dockerd[1870]: time="2025-05-17T00:18:43.335046202Z" level=info msg="Loading containers: start." May 17 00:18:43.446542 kernel: Initializing XFRM netlink socket May 17 00:18:43.534963 systemd-networkd[1375]: docker0: Link UP May 17 00:18:43.559793 dockerd[1870]: time="2025-05-17T00:18:43.559707344Z" level=info msg="Loading containers: done." 
May 17 00:18:43.580329 dockerd[1870]: time="2025-05-17T00:18:43.580090146Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:18:43.580329 dockerd[1870]: time="2025-05-17T00:18:43.580230066Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:18:43.580682 dockerd[1870]: time="2025-05-17T00:18:43.580362306Z" level=info msg="Daemon has completed initialization" May 17 00:18:43.625182 dockerd[1870]: time="2025-05-17T00:18:43.624097111Z" level=info msg="API listen on /run/docker.sock" May 17 00:18:43.624278 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:18:44.728507 containerd[1474]: time="2025-05-17T00:18:44.728347338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:18:45.431454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970023855.mount: Deactivated successfully. May 17 00:18:46.809546 containerd[1474]: time="2025-05-17T00:18:46.807890121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:46.810921 containerd[1474]: time="2025-05-17T00:18:46.810330881Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25652066" May 17 00:18:46.810921 containerd[1474]: time="2025-05-17T00:18:46.810378721Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:46.814429 containerd[1474]: time="2025-05-17T00:18:46.814363041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:46.816435 containerd[1474]: time="2025-05-17T00:18:46.815821842Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 2.087427104s" May 17 00:18:46.816435 containerd[1474]: time="2025-05-17T00:18:46.815877282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:18:46.818859 containerd[1474]: time="2025-05-17T00:18:46.818676122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:18:48.339238 containerd[1474]: time="2025-05-17T00:18:48.339144760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:48.341557 containerd[1474]: time="2025-05-17T00:18:48.341013440Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459548" May 17 00:18:48.342985 containerd[1474]: time="2025-05-17T00:18:48.342918401Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:48.347907 containerd[1474]: time="2025-05-17T00:18:48.347835601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:48.349622 containerd[1474]: time="2025-05-17T00:18:48.349455281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.530608479s" May 17 00:18:48.349622 containerd[1474]: time="2025-05-17T00:18:48.349521601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:18:48.351156 containerd[1474]: time="2025-05-17T00:18:48.351123281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:18:49.951306 containerd[1474]: time="2025-05-17T00:18:49.951241554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:49.953250 containerd[1474]: time="2025-05-17T00:18:49.952827315Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125299" May 17 00:18:49.954370 containerd[1474]: time="2025-05-17T00:18:49.954297235Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:49.959403 containerd[1474]: time="2025-05-17T00:18:49.957622475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:49.959403 containerd[1474]: time="2025-05-17T00:18:49.959008395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.607730354s" May 17 00:18:49.959403 containerd[1474]: time="2025-05-17T00:18:49.959045355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:18:49.959906 containerd[1474]: time="2025-05-17T00:18:49.959872915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:18:50.639952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 17 00:18:50.648849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:18:50.773656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:18:50.778415 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:18:50.822577 kubelet[2079]: E0517 00:18:50.822312 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:18:50.825425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:18:50.825577 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:18:51.069995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225500127.mount: Deactivated successfully. May 17 00:18:51.390363 containerd[1474]: time="2025-05-17T00:18:51.389530166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:51.391719 containerd[1474]: time="2025-05-17T00:18:51.391668446Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871401" May 17 00:18:51.393554 containerd[1474]: time="2025-05-17T00:18:51.393482046Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:51.398152 containerd[1474]: time="2025-05-17T00:18:51.398085127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:51.399961 containerd[1474]: time="2025-05-17T00:18:51.399691167Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.439689772s" May 17 00:18:51.399961 containerd[1474]: time="2025-05-17T00:18:51.399752287Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:18:51.400488 containerd[1474]: time="2025-05-17T00:18:51.400428207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:18:52.004755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711595872.mount: Deactivated successfully. 
May 17 00:18:52.863501 containerd[1474]: time="2025-05-17T00:18:52.862019966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:52.863501 containerd[1474]: time="2025-05-17T00:18:52.863407530Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" May 17 00:18:52.864693 containerd[1474]: time="2025-05-17T00:18:52.864653693Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:52.870155 containerd[1474]: time="2025-05-17T00:18:52.870100428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:52.874047 containerd[1474]: time="2025-05-17T00:18:52.871874273Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.471307306s" May 17 00:18:52.874047 containerd[1474]: time="2025-05-17T00:18:52.871927394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:18:52.874047 containerd[1474]: time="2025-05-17T00:18:52.872564915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:18:53.465951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810879407.mount: Deactivated successfully. 
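Each successful pull above ends with a containerd record naming the repo tag, the size containerd reports in bytes, and the wall-clock duration (coredns, for instance: 16948420 bytes in 1.471307306s). A stdlib-only sketch that turns those records into a size/duration/throughput table; it assumes a capture of this log under a hypothetical file name and relies on the \" escaping exactly as it appears in these entries:

import re

# Matches containerd records such as:
#   msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" ... size \"16948420\" in 1.471307306s"
pat = re.compile(
    r'Pulled image \\"(?P<image>[^\\"]+)\\".*?'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)"'
)

with open("boot.log", encoding="utf-8", errors="replace") as f:  # hypothetical capture of this console log
    text = f.read()

for m in pat.finditer(text):
    size_mib = int(m["size"]) / (1024 * 1024)
    seconds = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
    print(f'{m["image"]:55s} {size_mib:8.1f} MiB  {seconds:7.3f} s  {size_mib / seconds:6.1f} MiB/s')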
May 17 00:18:53.471992 containerd[1474]: time="2025-05-17T00:18:53.471908598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:53.473547 containerd[1474]: time="2025-05-17T00:18:53.473215670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 17 00:18:53.474675 containerd[1474]: time="2025-05-17T00:18:53.474624464Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:53.477585 containerd[1474]: time="2025-05-17T00:18:53.477539415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:53.479186 containerd[1474]: time="2025-05-17T00:18:53.478803686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 606.206291ms" May 17 00:18:53.479186 containerd[1474]: time="2025-05-17T00:18:53.478839807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:18:53.479837 containerd[1474]: time="2025-05-17T00:18:53.479573505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:18:54.041192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634865671.mount: Deactivated successfully. May 17 00:18:55.863680 containerd[1474]: time="2025-05-17T00:18:55.863601028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:55.867113 containerd[1474]: time="2025-05-17T00:18:55.866603977Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 17 00:18:55.868535 containerd[1474]: time="2025-05-17T00:18:55.868455980Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:18:55.878815 containerd[1474]: time="2025-05-17T00:18:55.878737057Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.399128952s" May 17 00:18:55.878815 containerd[1474]: time="2025-05-17T00:18:55.878785659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:18:55.879008 containerd[1474]: time="2025-05-17T00:18:55.878941902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:19:00.885373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
May 17 00:19:00.893827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:01.032733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:01.044101 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:19:01.098474 kubelet[2229]: E0517 00:19:01.096277 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:19:01.099149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:19:01.099304 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:19:01.353203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:01.361793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:01.394441 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-7.scope)... May 17 00:19:01.394655 systemd[1]: Reloading... May 17 00:19:01.521494 zram_generator::config[2285]: No configuration found. May 17 00:19:01.626976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:19:01.699501 systemd[1]: Reloading finished in 304 ms. May 17 00:19:01.771334 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:19:01.771777 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:19:01.773094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:01.782951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:01.914995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:01.927961 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:19:01.975907 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:19:01.975907 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:19:01.975907 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
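All three deprecation notices above point at the same mechanism: these settings are meant to live in the file passed via --config, which on this node is the /var/lib/kubelet/config.yaml whose absence drove the earlier restart loop. Purely to illustrate the shape of such a file, not what is eventually written on this host, a sketch that emits a minimal KubeletConfiguration; the containerRuntimeEndpoint value is an assumption (the log only shows that containerd is the runtime), and the field set is deliberately tiny:

import json

# Illustrative only: field names follow the upstream kubelet.config.k8s.io/v1beta1 schema;
# the endpoint value is hypothetical and is not taken from this log.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # assumed socket path
}

# JSON is a subset of YAML, so a YAML loader should accept this dump as-is.
print(json.dumps(kubelet_config, indent=2))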
May 17 00:19:01.976287 kubelet[2333]: I0517 00:19:01.975950 2333 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:19:02.491123 kubelet[2333]: I0517 00:19:02.491077 2333 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:19:02.493503 kubelet[2333]: I0517 00:19:02.491298 2333 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:19:02.493503 kubelet[2333]: I0517 00:19:02.491632 2333 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:19:02.521717 kubelet[2333]: E0517 00:19:02.521664 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.238.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:02.524821 kubelet[2333]: I0517 00:19:02.524779 2333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:19:02.535371 kubelet[2333]: E0517 00:19:02.535320 2333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:19:02.535577 kubelet[2333]: I0517 00:19:02.535560 2333 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:19:02.539578 kubelet[2333]: I0517 00:19:02.539548 2333 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:19:02.540850 kubelet[2333]: I0517 00:19:02.540822 2333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:19:02.541169 kubelet[2333]: I0517 00:19:02.541132 2333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:19:02.541418 kubelet[2333]: I0517 00:19:02.541230 2333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-0eec03f1fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:19:02.541575 kubelet[2333]: I0517 00:19:02.541561 2333 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:19:02.541632 kubelet[2333]: I0517 00:19:02.541624 2333 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:19:02.541968 kubelet[2333]: I0517 00:19:02.541954 2333 state_mem.go:36] "Initialized new in-memory state store" May 17 00:19:02.545356 kubelet[2333]: I0517 00:19:02.545299 2333 kubelet.go:408] "Attempting to sync node with API server" May 17 00:19:02.545649 kubelet[2333]: I0517 00:19:02.545624 2333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:19:02.545814 kubelet[2333]: I0517 00:19:02.545792 2333 kubelet.go:314] "Adding apiserver pod source" May 17 00:19:02.546000 kubelet[2333]: I0517 00:19:02.545975 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:19:02.552923 kubelet[2333]: W0517 00:19:02.552842 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.238.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-0eec03f1fd&limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:02.553057 kubelet[2333]: E0517 00:19:02.552968 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.199.238.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-0eec03f1fd&limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:02.553489 kubelet[2333]: W0517 00:19:02.553436 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.238.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:02.553818 kubelet[2333]: E0517 00:19:02.553789 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.238.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:02.554124 kubelet[2333]: I0517 00:19:02.554102 2333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:19:02.554997 kubelet[2333]: I0517 00:19:02.554959 2333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:19:02.555091 kubelet[2333]: W0517 00:19:02.555077 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:19:02.557486 kubelet[2333]: I0517 00:19:02.557453 2333 server.go:1274] "Started kubelet" May 17 00:19:02.559841 kubelet[2333]: I0517 00:19:02.559804 2333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:19:02.560717 kubelet[2333]: I0517 00:19:02.560660 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:19:02.561079 kubelet[2333]: I0517 00:19:02.561008 2333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:19:02.561416 kubelet[2333]: I0517 00:19:02.561401 2333 server.go:449] "Adding debug handlers to kubelet server" May 17 00:19:02.562422 kubelet[2333]: E0517 00:19:02.561177 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.238.255:6443/api/v1/namespaces/default/events\": dial tcp 138.199.238.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-0eec03f1fd.18402875f809c723 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-0eec03f1fd,UID:ci-4081-3-3-n-0eec03f1fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-0eec03f1fd,},FirstTimestamp:2025-05-17 00:19:02.557427491 +0000 UTC m=+0.625492091,LastTimestamp:2025-05-17 00:19:02.557427491 +0000 UTC m=+0.625492091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-0eec03f1fd,}" May 17 00:19:02.568032 kubelet[2333]: I0517 00:19:02.568006 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:19:02.569429 kubelet[2333]: I0517 00:19:02.569270 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:19:02.573272 kubelet[2333]: I0517 
00:19:02.573245 2333 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:19:02.573841 kubelet[2333]: E0517 00:19:02.573816 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-0eec03f1fd\" not found" May 17 00:19:02.574665 kubelet[2333]: I0517 00:19:02.574648 2333 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:19:02.574918 kubelet[2333]: I0517 00:19:02.574886 2333 reconciler.go:26] "Reconciler: start to sync state" May 17 00:19:02.578167 kubelet[2333]: E0517 00:19:02.578144 2333 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:19:02.578443 kubelet[2333]: W0517 00:19:02.578398 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.238.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:02.578571 kubelet[2333]: E0517 00:19:02.578549 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.238.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:02.578758 kubelet[2333]: E0517 00:19:02.578733 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.238.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-0eec03f1fd?timeout=10s\": dial tcp 138.199.238.255:6443: connect: connection refused" interval="200ms" May 17 00:19:02.579054 kubelet[2333]: I0517 00:19:02.579028 2333 factory.go:221] Registration of the systemd container factory successfully May 17 00:19:02.579243 kubelet[2333]: I0517 00:19:02.579224 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:19:02.581549 kubelet[2333]: I0517 00:19:02.581450 2333 factory.go:221] Registration of the containerd container factory successfully May 17 00:19:02.589953 kubelet[2333]: I0517 00:19:02.589876 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:19:02.591086 kubelet[2333]: I0517 00:19:02.591045 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:19:02.591086 kubelet[2333]: I0517 00:19:02.591079 2333 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:19:02.591205 kubelet[2333]: I0517 00:19:02.591102 2333 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:19:02.591205 kubelet[2333]: E0517 00:19:02.591147 2333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:19:02.598980 kubelet[2333]: W0517 00:19:02.598869 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.238.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:02.598980 kubelet[2333]: E0517 00:19:02.598976 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.238.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:02.622990 kubelet[2333]: I0517 00:19:02.622964 2333 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:19:02.622990 kubelet[2333]: I0517 00:19:02.622982 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:19:02.623143 kubelet[2333]: I0517 00:19:02.623003 2333 state_mem.go:36] "Initialized new in-memory state store" May 17 00:19:02.625345 kubelet[2333]: I0517 00:19:02.625316 2333 policy_none.go:49] "None policy: Start" May 17 00:19:02.626267 kubelet[2333]: I0517 00:19:02.626250 2333 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:19:02.626344 kubelet[2333]: I0517 00:19:02.626283 2333 state_mem.go:35] "Initializing new in-memory state store" May 17 00:19:02.632970 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:19:02.649594 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:19:02.654001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:19:02.663292 kubelet[2333]: I0517 00:19:02.662256 2333 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:19:02.663292 kubelet[2333]: I0517 00:19:02.662720 2333 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:19:02.663292 kubelet[2333]: I0517 00:19:02.662750 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:19:02.663292 kubelet[2333]: I0517 00:19:02.663241 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:19:02.667685 kubelet[2333]: E0517 00:19:02.667627 2333 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-0eec03f1fd\" not found" May 17 00:19:02.706762 systemd[1]: Created slice kubepods-burstable-poda660b3c5596bfe45bdd1eef2afff8e5e.slice - libcontainer container kubepods-burstable-poda660b3c5596bfe45bdd1eef2afff8e5e.slice. May 17 00:19:02.718593 systemd[1]: Created slice kubepods-burstable-podac42784046022700141e9a7f5fca8f36.slice - libcontainer container kubepods-burstable-podac42784046022700141e9a7f5fca8f36.slice. 
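The repeated "dial tcp 138.199.238.255:6443: connect: connection refused" errors above are expected at this point: the kubelet is trying to reach an API server it has not started yet, because kube-apiserver is itself one of the static pods from /etc/kubernetes/manifests whose cgroup slices are being created here. To tell "socket not open yet" apart from "server answering" while watching a node like this, a minimal polling sketch; only the host and port come from the log, while the /healthz path and the disabled certificate verification are illustrative shortcuts:

import ssl
import time
import urllib.error
import urllib.request

ENDPOINT = "https://138.199.238.255:6443/healthz"  # host:port from the log; the /healthz path is an assumption

ctx = ssl.create_default_context()
ctx.check_hostname = False      # sketch only: skip verification of the bootstrap serving certificate
ctx.verify_mode = ssl.CERT_NONE

for _ in range(30):             # poll for roughly a minute
    try:
        with urllib.request.urlopen(ENDPOINT, context=ctx, timeout=2) as resp:
            print("apiserver is answering:", resp.status)
            break
    except urllib.error.HTTPError as exc:
        # Any HTTP status, even 401/403 for an anonymous request, proves the listener is up.
        print("apiserver is answering:", exc.code)
        break
    except (urllib.error.URLError, OSError) as exc:
        print("still refused or unreachable:", exc)
        time.sleep(2)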
May 17 00:19:02.724171 systemd[1]: Created slice kubepods-burstable-pod0e29f16f7d97f99ea2d8ba51d1e4595c.slice - libcontainer container kubepods-burstable-pod0e29f16f7d97f99ea2d8ba51d1e4595c.slice. May 17 00:19:02.766677 kubelet[2333]: I0517 00:19:02.766107 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.768630 kubelet[2333]: E0517 00:19:02.768577 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.238.255:6443/api/v1/nodes\": dial tcp 138.199.238.255:6443: connect: connection refused" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.780195 kubelet[2333]: E0517 00:19:02.780122 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.238.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-0eec03f1fd?timeout=10s\": dial tcp 138.199.238.255:6443: connect: connection refused" interval="400ms" May 17 00:19:02.876108 kubelet[2333]: I0517 00:19:02.876019 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876383 kubelet[2333]: I0517 00:19:02.876292 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876738 kubelet[2333]: I0517 00:19:02.876521 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e29f16f7d97f99ea2d8ba51d1e4595c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-0eec03f1fd\" (UID: \"0e29f16f7d97f99ea2d8ba51d1e4595c\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876738 kubelet[2333]: I0517 00:19:02.876612 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876738 kubelet[2333]: I0517 00:19:02.876639 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876738 kubelet[2333]: I0517 00:19:02.876686 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.876738 kubelet[2333]: 
I0517 00:19:02.876713 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.877106 kubelet[2333]: I0517 00:19:02.877006 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.877106 kubelet[2333]: I0517 00:19:02.877072 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.971890 kubelet[2333]: I0517 00:19:02.971814 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:02.972225 kubelet[2333]: E0517 00:19:02.972187 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.238.255:6443/api/v1/nodes\": dial tcp 138.199.238.255:6443: connect: connection refused" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:03.016378 containerd[1474]: time="2025-05-17T00:19:03.016041610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-0eec03f1fd,Uid:a660b3c5596bfe45bdd1eef2afff8e5e,Namespace:kube-system,Attempt:0,}" May 17 00:19:03.022946 containerd[1474]: time="2025-05-17T00:19:03.022743614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-0eec03f1fd,Uid:ac42784046022700141e9a7f5fca8f36,Namespace:kube-system,Attempt:0,}" May 17 00:19:03.027928 containerd[1474]: time="2025-05-17T00:19:03.027838629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-0eec03f1fd,Uid:0e29f16f7d97f99ea2d8ba51d1e4595c,Namespace:kube-system,Attempt:0,}" May 17 00:19:03.181127 kubelet[2333]: E0517 00:19:03.181069 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.238.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-0eec03f1fd?timeout=10s\": dial tcp 138.199.238.255:6443: connect: connection refused" interval="800ms" May 17 00:19:03.375516 kubelet[2333]: I0517 00:19:03.375386 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:03.375987 kubelet[2333]: E0517 00:19:03.375883 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.238.255:6443/api/v1/nodes\": dial tcp 138.199.238.255:6443: connect: connection refused" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:03.558114 kubelet[2333]: W0517 00:19:03.558001 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.238.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
138.199.238.255:6443: connect: connection refused May 17 00:19:03.558114 kubelet[2333]: E0517 00:19:03.558071 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.238.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:03.559803 kubelet[2333]: W0517 00:19:03.559633 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.238.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:03.559803 kubelet[2333]: E0517 00:19:03.559731 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.238.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:03.577041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266768432.mount: Deactivated successfully. May 17 00:19:03.585913 containerd[1474]: time="2025-05-17T00:19:03.585820914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:19:03.587928 containerd[1474]: time="2025-05-17T00:19:03.587788071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:19:03.589312 containerd[1474]: time="2025-05-17T00:19:03.588137197Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:19:03.589312 containerd[1474]: time="2025-05-17T00:19:03.589267098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 17 00:19:03.590871 containerd[1474]: time="2025-05-17T00:19:03.590329118Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:19:03.591864 containerd[1474]: time="2025-05-17T00:19:03.591831626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:19:03.592217 containerd[1474]: time="2025-05-17T00:19:03.592193552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:19:03.595582 containerd[1474]: time="2025-05-17T00:19:03.595540494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:19:03.597545 containerd[1474]: time="2025-05-17T00:19:03.597506531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.51902ms" May 17 00:19:03.600421 containerd[1474]: time="2025-05-17T00:19:03.600378464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.176131ms" May 17 00:19:03.601155 containerd[1474]: time="2025-05-17T00:19:03.601118798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.24514ms" May 17 00:19:03.682391 kubelet[2333]: W0517 00:19:03.682335 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.238.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:03.682391 kubelet[2333]: E0517 00:19:03.682393 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.238.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:03.738157 containerd[1474]: time="2025-05-17T00:19:03.737981530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:03.738157 containerd[1474]: time="2025-05-17T00:19:03.738065452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:03.738157 containerd[1474]: time="2025-05-17T00:19:03.738087852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.739314 containerd[1474]: time="2025-05-17T00:19:03.739178313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.739314 containerd[1474]: time="2025-05-17T00:19:03.736695187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:03.739314 containerd[1474]: time="2025-05-17T00:19:03.739261594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:03.739314 containerd[1474]: time="2025-05-17T00:19:03.739280194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.739656 containerd[1474]: time="2025-05-17T00:19:03.739586360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.741273 containerd[1474]: time="2025-05-17T00:19:03.741180550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:03.741415 containerd[1474]: time="2025-05-17T00:19:03.741372633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:03.741517 containerd[1474]: time="2025-05-17T00:19:03.741480315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.743750 containerd[1474]: time="2025-05-17T00:19:03.743662835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:03.769756 systemd[1]: Started cri-containerd-041db958a07c0ee4af0e71c9a823346d4a9d7966b04153c3dec7c66d9fea755f.scope - libcontainer container 041db958a07c0ee4af0e71c9a823346d4a9d7966b04153c3dec7c66d9fea755f. May 17 00:19:03.772412 systemd[1]: Started cri-containerd-750c22c9d9eb04ffdbb509908084d05733c4b7b73a530913ef91deaf0e1b59ce.scope - libcontainer container 750c22c9d9eb04ffdbb509908084d05733c4b7b73a530913ef91deaf0e1b59ce. May 17 00:19:03.773813 systemd[1]: Started cri-containerd-d02d8e1c9c8d60a81e5d81ca1aed76f9785e55e03821633e1373b7cd8422e6c2.scope - libcontainer container d02d8e1c9c8d60a81e5d81ca1aed76f9785e55e03821633e1373b7cd8422e6c2. May 17 00:19:03.817090 containerd[1474]: time="2025-05-17T00:19:03.817038593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-0eec03f1fd,Uid:a660b3c5596bfe45bdd1eef2afff8e5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d02d8e1c9c8d60a81e5d81ca1aed76f9785e55e03821633e1373b7cd8422e6c2\"" May 17 00:19:03.823648 containerd[1474]: time="2025-05-17T00:19:03.823597915Z" level=info msg="CreateContainer within sandbox \"d02d8e1c9c8d60a81e5d81ca1aed76f9785e55e03821633e1373b7cd8422e6c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:19:03.845433 containerd[1474]: time="2025-05-17T00:19:03.845388318Z" level=info msg="CreateContainer within sandbox \"d02d8e1c9c8d60a81e5d81ca1aed76f9785e55e03821633e1373b7cd8422e6c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5543cb2c879a75c0b841752c3c842b6c740157ca017ac1982cd2041e49780682\"" May 17 00:19:03.846558 containerd[1474]: time="2025-05-17T00:19:03.846517859Z" level=info msg="StartContainer for \"5543cb2c879a75c0b841752c3c842b6c740157ca017ac1982cd2041e49780682\"" May 17 00:19:03.850779 containerd[1474]: time="2025-05-17T00:19:03.850399171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-0eec03f1fd,Uid:ac42784046022700141e9a7f5fca8f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"750c22c9d9eb04ffdbb509908084d05733c4b7b73a530913ef91deaf0e1b59ce\"" May 17 00:19:03.857621 containerd[1474]: time="2025-05-17T00:19:03.857402620Z" level=info msg="CreateContainer within sandbox \"750c22c9d9eb04ffdbb509908084d05733c4b7b73a530913ef91deaf0e1b59ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:19:03.864720 containerd[1474]: time="2025-05-17T00:19:03.864647594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-0eec03f1fd,Uid:0e29f16f7d97f99ea2d8ba51d1e4595c,Namespace:kube-system,Attempt:0,} returns sandbox id \"041db958a07c0ee4af0e71c9a823346d4a9d7966b04153c3dec7c66d9fea755f\"" May 17 00:19:03.871233 containerd[1474]: time="2025-05-17T00:19:03.871045713Z" level=info msg="CreateContainer within sandbox 
\"041db958a07c0ee4af0e71c9a823346d4a9d7966b04153c3dec7c66d9fea755f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:19:03.885624 containerd[1474]: time="2025-05-17T00:19:03.885501180Z" level=info msg="CreateContainer within sandbox \"750c22c9d9eb04ffdbb509908084d05733c4b7b73a530913ef91deaf0e1b59ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab06d90ae170586b13b51013485227afc9bba77c1aeb69735035dc763cfba2dc\"" May 17 00:19:03.886242 containerd[1474]: time="2025-05-17T00:19:03.886215994Z" level=info msg="StartContainer for \"ab06d90ae170586b13b51013485227afc9bba77c1aeb69735035dc763cfba2dc\"" May 17 00:19:03.894730 systemd[1]: Started cri-containerd-5543cb2c879a75c0b841752c3c842b6c740157ca017ac1982cd2041e49780682.scope - libcontainer container 5543cb2c879a75c0b841752c3c842b6c740157ca017ac1982cd2041e49780682. May 17 00:19:03.914079 containerd[1474]: time="2025-05-17T00:19:03.913656221Z" level=info msg="CreateContainer within sandbox \"041db958a07c0ee4af0e71c9a823346d4a9d7966b04153c3dec7c66d9fea755f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1914268d8b29fdb4444c22a039cc7ac5a880453077836af731cc41a6be57ba2\"" May 17 00:19:03.915506 containerd[1474]: time="2025-05-17T00:19:03.914690521Z" level=info msg="StartContainer for \"c1914268d8b29fdb4444c22a039cc7ac5a880453077836af731cc41a6be57ba2\"" May 17 00:19:03.937673 systemd[1]: Started cri-containerd-ab06d90ae170586b13b51013485227afc9bba77c1aeb69735035dc763cfba2dc.scope - libcontainer container ab06d90ae170586b13b51013485227afc9bba77c1aeb69735035dc763cfba2dc. May 17 00:19:03.958379 containerd[1474]: time="2025-05-17T00:19:03.958330728Z" level=info msg="StartContainer for \"5543cb2c879a75c0b841752c3c842b6c740157ca017ac1982cd2041e49780682\" returns successfully" May 17 00:19:03.982334 kubelet[2333]: E0517 00:19:03.981633 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.238.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-0eec03f1fd?timeout=10s\": dial tcp 138.199.238.255:6443: connect: connection refused" interval="1.6s" May 17 00:19:03.985246 systemd[1]: Started cri-containerd-c1914268d8b29fdb4444c22a039cc7ac5a880453077836af731cc41a6be57ba2.scope - libcontainer container c1914268d8b29fdb4444c22a039cc7ac5a880453077836af731cc41a6be57ba2. 
May 17 00:19:03.993726 containerd[1474]: time="2025-05-17T00:19:03.993673022Z" level=info msg="StartContainer for \"ab06d90ae170586b13b51013485227afc9bba77c1aeb69735035dc763cfba2dc\" returns successfully" May 17 00:19:04.001930 kubelet[2333]: W0517 00:19:04.001765 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.238.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-0eec03f1fd&limit=500&resourceVersion=0": dial tcp 138.199.238.255:6443: connect: connection refused May 17 00:19:04.001930 kubelet[2333]: E0517 00:19:04.001847 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.238.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-0eec03f1fd&limit=500&resourceVersion=0\": dial tcp 138.199.238.255:6443: connect: connection refused" logger="UnhandledError" May 17 00:19:04.053490 containerd[1474]: time="2025-05-17T00:19:04.052446164Z" level=info msg="StartContainer for \"c1914268d8b29fdb4444c22a039cc7ac5a880453077836af731cc41a6be57ba2\" returns successfully" May 17 00:19:04.179581 kubelet[2333]: I0517 00:19:04.179545 2333 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:06.717592 kubelet[2333]: E0517 00:19:06.717542 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-0eec03f1fd\" not found" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:06.844501 kubelet[2333]: I0517 00:19:06.844180 2333 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:06.844501 kubelet[2333]: E0517 00:19:06.844222 2333 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-0eec03f1fd\": node \"ci-4081-3-3-n-0eec03f1fd\" not found" May 17 00:19:06.847284 kubelet[2333]: E0517 00:19:06.847088 2333 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-0eec03f1fd.18402875f809c723 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-0eec03f1fd,UID:ci-4081-3-3-n-0eec03f1fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-0eec03f1fd,},FirstTimestamp:2025-05-17 00:19:02.557427491 +0000 UTC m=+0.625492091,LastTimestamp:2025-05-17 00:19:02.557427491 +0000 UTC m=+0.625492091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-0eec03f1fd,}" May 17 00:19:07.555740 kubelet[2333]: I0517 00:19:07.555473 2333 apiserver.go:52] "Watching apiserver" May 17 00:19:07.575874 kubelet[2333]: I0517 00:19:07.575805 2333 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:19:08.885627 systemd[1]: Reloading requested from client PID 2602 ('systemctl') (unit session-7.scope)... May 17 00:19:08.885646 systemd[1]: Reloading... May 17 00:19:08.986514 zram_generator::config[2638]: No configuration found. May 17 00:19:09.112077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
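The kubelet lines above walk through node registration: "Attempting to register node" while the API server still answers "not found", then "Successfully registered node" a few seconds later. A hedged client-go sketch that checks the same registration state from outside the kubelet (the kubeconfig path is an assumption, and the program is purely illustrative, not part of any component shown here):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig is available at this path; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The node name reported in the log above.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ci-4081-3-3-n-0eec03f1fd", metav1.GetOptions{})
	if err != nil {
		// While registration is still in flight this returns "not found",
		// matching the kubelet's own error above.
		log.Fatalf("node lookup failed: %v", err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, cond.Status, cond.Reason)
		}
	}
}
```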
May 17 00:19:09.201822 systemd[1]: Reloading finished in 315 ms. May 17 00:19:09.242256 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:09.256606 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:19:09.257716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:09.257818 systemd[1]: kubelet.service: Consumed 1.091s CPU time, 129.4M memory peak, 0B memory swap peak. May 17 00:19:09.266238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:09.414367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:09.415026 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:19:09.469018 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:19:09.469018 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:19:09.469018 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:19:09.469354 kubelet[2687]: I0517 00:19:09.469052 2687 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:19:09.480631 kubelet[2687]: I0517 00:19:09.480586 2687 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:19:09.480631 kubelet[2687]: I0517 00:19:09.480619 2687 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:19:09.481016 kubelet[2687]: I0517 00:19:09.480909 2687 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:19:09.482780 kubelet[2687]: I0517 00:19:09.482743 2687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:19:09.488209 kubelet[2687]: I0517 00:19:09.487577 2687 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:19:09.492016 kubelet[2687]: E0517 00:19:09.491957 2687 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:19:09.492575 kubelet[2687]: I0517 00:19:09.492489 2687 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:19:09.497428 kubelet[2687]: I0517 00:19:09.497369 2687 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:19:09.497701 kubelet[2687]: I0517 00:19:09.497643 2687 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:19:09.497780 kubelet[2687]: I0517 00:19:09.497750 2687 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:19:09.500524 kubelet[2687]: I0517 00:19:09.497781 2687 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-0eec03f1fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:19:09.500524 kubelet[2687]: I0517 00:19:09.500262 2687 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:19:09.500524 kubelet[2687]: I0517 00:19:09.500276 2687 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:19:09.500524 kubelet[2687]: I0517 00:19:09.500328 2687 state_mem.go:36] "Initialized new in-memory state store" May 17 00:19:09.500524 kubelet[2687]: I0517 00:19:09.500452 2687 kubelet.go:408] "Attempting to sync node with API server" May 17 00:19:09.500845 kubelet[2687]: I0517 00:19:09.500480 2687 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:19:09.500845 kubelet[2687]: I0517 00:19:09.500501 2687 kubelet.go:314] "Adding apiserver pod source" May 17 00:19:09.504603 kubelet[2687]: I0517 00:19:09.502190 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:19:09.511167 kubelet[2687]: I0517 00:19:09.511134 2687 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:19:09.514356 kubelet[2687]: I0517 00:19:09.513130 2687 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:19:09.515942 kubelet[2687]: I0517 00:19:09.515905 2687 server.go:1274] "Started kubelet" May 17 00:19:09.518670 kubelet[2687]: I0517 00:19:09.518491 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:19:09.533039 
kubelet[2687]: I0517 00:19:09.532953 2687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:19:09.549525 kubelet[2687]: I0517 00:19:09.533239 2687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:19:09.549692 kubelet[2687]: I0517 00:19:09.549622 2687 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:19:09.549692 kubelet[2687]: I0517 00:19:09.534277 2687 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:19:09.549759 kubelet[2687]: I0517 00:19:09.533616 2687 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:19:09.550454 kubelet[2687]: I0517 00:19:09.534289 2687 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:19:09.550454 kubelet[2687]: I0517 00:19:09.549940 2687 reconciler.go:26] "Reconciler: start to sync state" May 17 00:19:09.550454 kubelet[2687]: I0517 00:19:09.548191 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:19:09.551487 kubelet[2687]: I0517 00:19:09.551389 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:19:09.551487 kubelet[2687]: I0517 00:19:09.551420 2687 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:19:09.551487 kubelet[2687]: I0517 00:19:09.551449 2687 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:19:09.551761 kubelet[2687]: E0517 00:19:09.551515 2687 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:19:09.551761 kubelet[2687]: E0517 00:19:09.534450 2687 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-0eec03f1fd\" not found" May 17 00:19:09.551885 kubelet[2687]: I0517 00:19:09.541146 2687 factory.go:221] Registration of the systemd container factory successfully May 17 00:19:09.552013 kubelet[2687]: I0517 00:19:09.551985 2687 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:19:09.552281 kubelet[2687]: E0517 00:19:09.548608 2687 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:19:09.552934 kubelet[2687]: I0517 00:19:09.552912 2687 server.go:449] "Adding debug handlers to kubelet server" May 17 00:19:09.564123 kubelet[2687]: I0517 00:19:09.564010 2687 factory.go:221] Registration of the containerd container factory successfully May 17 00:19:09.633262 kubelet[2687]: I0517 00:19:09.633236 2687 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:19:09.633438 kubelet[2687]: I0517 00:19:09.633423 2687 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:19:09.633546 kubelet[2687]: I0517 00:19:09.633535 2687 state_mem.go:36] "Initialized new in-memory state store" May 17 00:19:09.633768 kubelet[2687]: I0517 00:19:09.633752 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:19:09.633923 kubelet[2687]: I0517 00:19:09.633889 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:19:09.634023 kubelet[2687]: I0517 00:19:09.634009 2687 policy_none.go:49] "None policy: Start" May 17 00:19:09.634892 kubelet[2687]: I0517 00:19:09.634869 2687 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:19:09.634965 kubelet[2687]: I0517 00:19:09.634902 2687 state_mem.go:35] "Initializing new in-memory state store" May 17 00:19:09.635115 kubelet[2687]: I0517 00:19:09.635097 2687 state_mem.go:75] "Updated machine memory state" May 17 00:19:09.639712 kubelet[2687]: I0517 00:19:09.639484 2687 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:19:09.639913 kubelet[2687]: I0517 00:19:09.639866 2687 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:19:09.639913 kubelet[2687]: I0517 00:19:09.639885 2687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:19:09.641408 kubelet[2687]: I0517 00:19:09.640690 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:19:09.664785 kubelet[2687]: E0517 00:19:09.664726 2687 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.747217 kubelet[2687]: I0517 00:19:09.747101 2687 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.761288 kubelet[2687]: I0517 00:19:09.761190 2687 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.761553 kubelet[2687]: I0517 00:19:09.761329 2687 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.850838 kubelet[2687]: I0517 00:19:09.850420 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.850838 kubelet[2687]: I0517 00:19:09.850494 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " 
pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.850838 kubelet[2687]: I0517 00:19:09.850526 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a660b3c5596bfe45bdd1eef2afff8e5e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-0eec03f1fd\" (UID: \"a660b3c5596bfe45bdd1eef2afff8e5e\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.850838 kubelet[2687]: I0517 00:19:09.850557 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.850838 kubelet[2687]: I0517 00:19:09.850583 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.851222 kubelet[2687]: I0517 00:19:09.850608 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.851222 kubelet[2687]: I0517 00:19:09.850721 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e29f16f7d97f99ea2d8ba51d1e4595c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-0eec03f1fd\" (UID: \"0e29f16f7d97f99ea2d8ba51d1e4595c\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.851222 kubelet[2687]: I0517 00:19:09.850756 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.851222 kubelet[2687]: I0517 00:19:09.850780 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac42784046022700141e9a7f5fca8f36-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-0eec03f1fd\" (UID: \"ac42784046022700141e9a7f5fca8f36\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" May 17 00:19:09.879765 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:19:09.880788 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:19:10.359685 sudo[2720]: pam_unix(sudo:session): session closed for user root May 17 00:19:10.504485 kubelet[2687]: I0517 00:19:10.504394 2687 apiserver.go:52] "Watching apiserver" May 17 
00:19:10.552665 kubelet[2687]: I0517 00:19:10.552586 2687 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:19:10.582825 kubelet[2687]: I0517 00:19:10.582227 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-0eec03f1fd" podStartSLOduration=1.582204277 podStartE2EDuration="1.582204277s" podCreationTimestamp="2025-05-17 00:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:10.568970953 +0000 UTC m=+1.147419139" watchObservedRunningTime="2025-05-17 00:19:10.582204277 +0000 UTC m=+1.160652463" May 17 00:19:10.612233 kubelet[2687]: I0517 00:19:10.611451 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-0eec03f1fd" podStartSLOduration=2.611408845 podStartE2EDuration="2.611408845s" podCreationTimestamp="2025-05-17 00:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:10.583686339 +0000 UTC m=+1.162134525" watchObservedRunningTime="2025-05-17 00:19:10.611408845 +0000 UTC m=+1.189856991" May 17 00:19:10.634228 kubelet[2687]: I0517 00:19:10.634059 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-0eec03f1fd" podStartSLOduration=1.634039113 podStartE2EDuration="1.634039113s" podCreationTimestamp="2025-05-17 00:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:10.611834732 +0000 UTC m=+1.190282878" watchObservedRunningTime="2025-05-17 00:19:10.634039113 +0000 UTC m=+1.212487259" May 17 00:19:13.103372 sudo[1854]: pam_unix(sudo:session): session closed for user root May 17 00:19:13.267855 sshd[1851]: pam_unix(sshd:session): session closed for user core May 17 00:19:13.274899 systemd[1]: sshd@6-138.199.238.255:22-139.178.68.195:57576.service: Deactivated successfully. May 17 00:19:13.278547 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:19:13.279599 systemd[1]: session-7.scope: Consumed 8.331s CPU time, 150.7M memory peak, 0B memory swap peak. May 17 00:19:13.281298 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. May 17 00:19:13.282575 systemd-logind[1460]: Removed session 7. May 17 00:19:14.290366 kubelet[2687]: I0517 00:19:14.290316 2687 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:19:14.291994 containerd[1474]: time="2025-05-17T00:19:14.291363484Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:19:14.292539 kubelet[2687]: I0517 00:19:14.291681 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:19:15.323118 systemd[1]: Created slice kubepods-besteffort-pod46dbe2b1_33ba_42ac_9ce9_31fe6447148c.slice - libcontainer container kubepods-besteffort-pod46dbe2b1_33ba_42ac_9ce9_31fe6447148c.slice. May 17 00:19:15.342276 systemd[1]: Created slice kubepods-burstable-pod5b6815f8_f74d_4c8f_8edc_714d706aeb05.slice - libcontainer container kubepods-burstable-pod5b6815f8_f74d_4c8f_8edc_714d706aeb05.slice. 
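The pod_startup_latency_tracker entries above report a podStartE2EDuration for each static control-plane pod; the pull timestamps are the zero value because the images were already present, and the reported 1.582204277s for kube-scheduler lines up exactly with watchObservedRunningTime minus podCreationTimestamp. A small standard-library sketch that verifies that arithmetic from the timestamps printed in the log:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the kube-scheduler entry in the log above.
	created, err := time.Parse(layout, "2025-05-17 00:19:09 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	watched, err := time.Parse(layout, "2025-05-17 00:19:10.582204277 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// With no image pull involved, the reported end-to-end figure matches
	// the gap between pod creation and the watch observing it running.
	fmt.Println(watched.Sub(created)) // 1.582204277s, matching podStartE2EDuration
}
```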
May 17 00:19:15.388696 kubelet[2687]: I0517 00:19:15.388626 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-etc-cni-netd\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.388696 kubelet[2687]: I0517 00:19:15.388679 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-xtables-lock\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.388696 kubelet[2687]: I0517 00:19:15.388704 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-net\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388724 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46dbe2b1-33ba-42ac-9ce9-31fe6447148c-kube-proxy\") pod \"kube-proxy-m5ppc\" (UID: \"46dbe2b1-33ba-42ac-9ce9-31fe6447148c\") " pod="kube-system/kube-proxy-m5ppc" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388745 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cni-path\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388765 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-kernel\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388784 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46dbe2b1-33ba-42ac-9ce9-31fe6447148c-xtables-lock\") pod \"kube-proxy-m5ppc\" (UID: \"46dbe2b1-33ba-42ac-9ce9-31fe6447148c\") " pod="kube-system/kube-proxy-m5ppc" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388802 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46dbe2b1-33ba-42ac-9ce9-31fe6447148c-lib-modules\") pod \"kube-proxy-m5ppc\" (UID: \"46dbe2b1-33ba-42ac-9ce9-31fe6447148c\") " pod="kube-system/kube-proxy-m5ppc" May 17 00:19:15.389580 kubelet[2687]: I0517 00:19:15.388822 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-run\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388843 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-cgroup\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388866 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-bpf-maps\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388884 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hostproc\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388901 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-lib-modules\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388921 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b6815f8-f74d-4c8f-8edc-714d706aeb05-clustermesh-secrets\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389746 kubelet[2687]: I0517 00:19:15.388938 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-config-path\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389920 kubelet[2687]: I0517 00:19:15.388956 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hubble-tls\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389920 kubelet[2687]: I0517 00:19:15.388974 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gs7x\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-kube-api-access-5gs7x\") pod \"cilium-qnq9t\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " pod="kube-system/cilium-qnq9t" May 17 00:19:15.389920 kubelet[2687]: I0517 00:19:15.388995 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxz2f\" (UniqueName: \"kubernetes.io/projected/46dbe2b1-33ba-42ac-9ce9-31fe6447148c-kube-api-access-lxz2f\") pod \"kube-proxy-m5ppc\" (UID: \"46dbe2b1-33ba-42ac-9ce9-31fe6447148c\") " pod="kube-system/kube-proxy-m5ppc" May 17 00:19:15.437050 systemd[1]: Created slice kubepods-besteffort-pod2a27214d_b7cb_4f59_b6c8_de777421fb64.slice - libcontainer container kubepods-besteffort-pod2a27214d_b7cb_4f59_b6c8_de777421fb64.slice. 
May 17 00:19:15.591048 kubelet[2687]: I0517 00:19:15.590850 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a27214d-b7cb-4f59-b6c8-de777421fb64-cilium-config-path\") pod \"cilium-operator-5d85765b45-xt968\" (UID: \"2a27214d-b7cb-4f59-b6c8-de777421fb64\") " pod="kube-system/cilium-operator-5d85765b45-xt968" May 17 00:19:15.591048 kubelet[2687]: I0517 00:19:15.590968 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nk8\" (UniqueName: \"kubernetes.io/projected/2a27214d-b7cb-4f59-b6c8-de777421fb64-kube-api-access-d4nk8\") pod \"cilium-operator-5d85765b45-xt968\" (UID: \"2a27214d-b7cb-4f59-b6c8-de777421fb64\") " pod="kube-system/cilium-operator-5d85765b45-xt968" May 17 00:19:15.638134 containerd[1474]: time="2025-05-17T00:19:15.638037874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m5ppc,Uid:46dbe2b1-33ba-42ac-9ce9-31fe6447148c,Namespace:kube-system,Attempt:0,}" May 17 00:19:15.649197 containerd[1474]: time="2025-05-17T00:19:15.648353014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qnq9t,Uid:5b6815f8-f74d-4c8f-8edc-714d706aeb05,Namespace:kube-system,Attempt:0,}" May 17 00:19:15.671752 containerd[1474]: time="2025-05-17T00:19:15.671628528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:15.671752 containerd[1474]: time="2025-05-17T00:19:15.671698489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:15.671752 containerd[1474]: time="2025-05-17T00:19:15.671715529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.673192 containerd[1474]: time="2025-05-17T00:19:15.672911185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.682982 containerd[1474]: time="2025-05-17T00:19:15.682090149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:15.682982 containerd[1474]: time="2025-05-17T00:19:15.682925081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:15.682982 containerd[1474]: time="2025-05-17T00:19:15.682947321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.683605 containerd[1474]: time="2025-05-17T00:19:15.683400287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.704180 systemd[1]: Started cri-containerd-91b3c30a2f3012f860d9feea70c4d0953d58eeee8e01c45cfdc6570442a90a78.scope - libcontainer container 91b3c30a2f3012f860d9feea70c4d0953d58eeee8e01c45cfdc6570442a90a78. May 17 00:19:15.714637 systemd[1]: Started cri-containerd-11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf.scope - libcontainer container 11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf. 
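Each "Started cri-containerd-<id>.scope" line above is systemd creating a transient scope for a container that containerd launched on behalf of the kubelet. A hedged sketch of inspecting those same containers directly with the containerd Go client (assuming the default socket path and the CRI "k8s.io" namespace):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses via CRI.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed sandboxes and containers live in the "k8s.io" namespace,
	// which is where the cri-containerd-<id>.scope units above come from.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s\t%s\n", c.ID(), img.Name())
	}
}
```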
May 17 00:19:15.746911 containerd[1474]: time="2025-05-17T00:19:15.746853304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xt968,Uid:2a27214d-b7cb-4f59-b6c8-de777421fb64,Namespace:kube-system,Attempt:0,}" May 17 00:19:15.756174 containerd[1474]: time="2025-05-17T00:19:15.756056508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m5ppc,Uid:46dbe2b1-33ba-42ac-9ce9-31fe6447148c,Namespace:kube-system,Attempt:0,} returns sandbox id \"91b3c30a2f3012f860d9feea70c4d0953d58eeee8e01c45cfdc6570442a90a78\"" May 17 00:19:15.774251 containerd[1474]: time="2025-05-17T00:19:15.773346621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qnq9t,Uid:5b6815f8-f74d-4c8f-8edc-714d706aeb05,Namespace:kube-system,Attempt:0,} returns sandbox id \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\"" May 17 00:19:15.776920 containerd[1474]: time="2025-05-17T00:19:15.776855829Z" level=info msg="CreateContainer within sandbox \"91b3c30a2f3012f860d9feea70c4d0953d58eeee8e01c45cfdc6570442a90a78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:19:15.781034 containerd[1474]: time="2025-05-17T00:19:15.780967244Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:19:15.806737 containerd[1474]: time="2025-05-17T00:19:15.806419428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:15.806737 containerd[1474]: time="2025-05-17T00:19:15.806556430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:15.808286 containerd[1474]: time="2025-05-17T00:19:15.808170492Z" level=info msg="CreateContainer within sandbox \"91b3c30a2f3012f860d9feea70c4d0953d58eeee8e01c45cfdc6570442a90a78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d68b27e6bcca64235134dec1a8194b55f06fe02bf301788cb2bb5ba51619e93a\"" May 17 00:19:15.808457 containerd[1474]: time="2025-05-17T00:19:15.807977049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.808457 containerd[1474]: time="2025-05-17T00:19:15.808205052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:15.810675 containerd[1474]: time="2025-05-17T00:19:15.809352148Z" level=info msg="StartContainer for \"d68b27e6bcca64235134dec1a8194b55f06fe02bf301788cb2bb5ba51619e93a\"" May 17 00:19:15.831823 systemd[1]: Started cri-containerd-834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb.scope - libcontainer container 834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb. May 17 00:19:15.845759 systemd[1]: Started cri-containerd-d68b27e6bcca64235134dec1a8194b55f06fe02bf301788cb2bb5ba51619e93a.scope - libcontainer container d68b27e6bcca64235134dec1a8194b55f06fe02bf301788cb2bb5ba51619e93a. 
May 17 00:19:15.887596 containerd[1474]: time="2025-05-17T00:19:15.887429562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xt968,Uid:2a27214d-b7cb-4f59-b6c8-de777421fb64,Namespace:kube-system,Attempt:0,} returns sandbox id \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\"" May 17 00:19:15.898182 containerd[1474]: time="2025-05-17T00:19:15.898114746Z" level=info msg="StartContainer for \"d68b27e6bcca64235134dec1a8194b55f06fe02bf301788cb2bb5ba51619e93a\" returns successfully" May 17 00:19:16.634330 kubelet[2687]: I0517 00:19:16.633460 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m5ppc" podStartSLOduration=1.633414901 podStartE2EDuration="1.633414901s" podCreationTimestamp="2025-05-17 00:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:16.63333314 +0000 UTC m=+7.211781326" watchObservedRunningTime="2025-05-17 00:19:16.633414901 +0000 UTC m=+7.211863047" May 17 00:19:20.053135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835099784.mount: Deactivated successfully. May 17 00:19:21.376740 containerd[1474]: time="2025-05-17T00:19:21.376668284Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:19:21.378901 containerd[1474]: time="2025-05-17T00:19:21.378664748Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 17 00:19:21.379796 containerd[1474]: time="2025-05-17T00:19:21.379728120Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:19:21.382279 containerd[1474]: time="2025-05-17T00:19:21.381716743Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.600700698s" May 17 00:19:21.382279 containerd[1474]: time="2025-05-17T00:19:21.381766904Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 00:19:21.384671 containerd[1474]: time="2025-05-17T00:19:21.384580416Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:19:21.386020 containerd[1474]: time="2025-05-17T00:19:21.385629069Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:19:21.412415 containerd[1474]: time="2025-05-17T00:19:21.412329419Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\"" May 17 00:19:21.414874 containerd[1474]: time="2025-05-17T00:19:21.414811768Z" level=info msg="StartContainer for \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\"" May 17 00:19:21.449750 systemd[1]: Started cri-containerd-0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6.scope - libcontainer container 0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6. May 17 00:19:21.485693 containerd[1474]: time="2025-05-17T00:19:21.484744621Z" level=info msg="StartContainer for \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\" returns successfully" May 17 00:19:21.498256 systemd[1]: cri-containerd-0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6.scope: Deactivated successfully. May 17 00:19:21.524283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6-rootfs.mount: Deactivated successfully. May 17 00:19:21.701133 containerd[1474]: time="2025-05-17T00:19:21.701016217Z" level=info msg="shim disconnected" id=0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6 namespace=k8s.io May 17 00:19:21.701579 containerd[1474]: time="2025-05-17T00:19:21.701424861Z" level=warning msg="cleaning up after shim disconnected" id=0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6 namespace=k8s.io May 17 00:19:21.701579 containerd[1474]: time="2025-05-17T00:19:21.701448542Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:22.647631 containerd[1474]: time="2025-05-17T00:19:22.647163761Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:19:22.665785 containerd[1474]: time="2025-05-17T00:19:22.665719172Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\"" May 17 00:19:22.669533 containerd[1474]: time="2025-05-17T00:19:22.668662405Z" level=info msg="StartContainer for \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\"" May 17 00:19:22.702394 systemd[1]: run-containerd-runc-k8s.io-1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b-runc.O1MNtO.mount: Deactivated successfully. May 17 00:19:22.716405 systemd[1]: Started cri-containerd-1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b.scope - libcontainer container 1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b. May 17 00:19:22.748432 containerd[1474]: time="2025-05-17T00:19:22.748386390Z" level=info msg="StartContainer for \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\" returns successfully" May 17 00:19:22.765657 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:19:22.765927 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:22.766002 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:19:22.781233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:19:22.781489 systemd[1]: cri-containerd-1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b.scope: Deactivated successfully. 
May 17 00:19:22.809159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:22.825041 containerd[1474]: time="2025-05-17T00:19:22.824962339Z" level=info msg="shim disconnected" id=1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b namespace=k8s.io May 17 00:19:22.825677 containerd[1474]: time="2025-05-17T00:19:22.825430305Z" level=warning msg="cleaning up after shim disconnected" id=1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b namespace=k8s.io May 17 00:19:22.825677 containerd[1474]: time="2025-05-17T00:19:22.825460265Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:22.838569 containerd[1474]: time="2025-05-17T00:19:22.838515413Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:19:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:19:23.656427 containerd[1474]: time="2025-05-17T00:19:23.656269161Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:19:23.663347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b-rootfs.mount: Deactivated successfully. May 17 00:19:23.689821 containerd[1474]: time="2025-05-17T00:19:23.689768132Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\"" May 17 00:19:23.694619 containerd[1474]: time="2025-05-17T00:19:23.693541974Z" level=info msg="StartContainer for \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\"" May 17 00:19:23.748821 systemd[1]: Started cri-containerd-c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7.scope - libcontainer container c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7. 
May 17 00:19:23.770343 containerd[1474]: time="2025-05-17T00:19:23.770284144Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:19:23.772520 containerd[1474]: time="2025-05-17T00:19:23.772362087Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 17 00:19:23.773800 containerd[1474]: time="2025-05-17T00:19:23.773748263Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:19:23.777898 containerd[1474]: time="2025-05-17T00:19:23.777195621Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.392557284s" May 17 00:19:23.777898 containerd[1474]: time="2025-05-17T00:19:23.777613626Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 00:19:23.782931 containerd[1474]: time="2025-05-17T00:19:23.782882204Z" level=info msg="CreateContainer within sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:19:23.803287 containerd[1474]: time="2025-05-17T00:19:23.803230069Z" level=info msg="StartContainer for \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\" returns successfully" May 17 00:19:23.804286 systemd[1]: cri-containerd-c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7.scope: Deactivated successfully. May 17 00:19:23.807602 containerd[1474]: time="2025-05-17T00:19:23.806258903Z" level=info msg="CreateContainer within sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\"" May 17 00:19:23.811340 containerd[1474]: time="2025-05-17T00:19:23.811261278Z" level=info msg="StartContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\"" May 17 00:19:23.854743 systemd[1]: Started cri-containerd-811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0.scope - libcontainer container 811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0. 
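Both Cilium images above are pulled by tag-plus-digest, and containerd reports them with an empty repo tag and the digest as the canonical reference. A naive standard-library sketch of splitting such a reference into its parts (real code would use a proper image-reference parser; this is only to make the log fields easier to read):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks a "name:tag@sha256:..." style reference into its parts.
// It is a simple illustration, not a replacement for a real parser.
func splitRef(ref string) (name, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// Only treat the suffix as a tag if the colon comes after the last "/",
	// so registry ports such as "registry:5000/img" are not mistaken for tags.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	// Reference taken from the PullImage entry in the log above.
	name, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println("name:  ", name)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}
```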
May 17 00:19:23.879635 containerd[1474]: time="2025-05-17T00:19:23.879554995Z" level=info msg="shim disconnected" id=c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7 namespace=k8s.io May 17 00:19:23.880233 containerd[1474]: time="2025-05-17T00:19:23.879957560Z" level=warning msg="cleaning up after shim disconnected" id=c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7 namespace=k8s.io May 17 00:19:23.880233 containerd[1474]: time="2025-05-17T00:19:23.879983920Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:23.897817 containerd[1474]: time="2025-05-17T00:19:23.897686236Z" level=info msg="StartContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" returns successfully" May 17 00:19:24.662504 containerd[1474]: time="2025-05-17T00:19:24.661734691Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:19:24.663104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7-rootfs.mount: Deactivated successfully. May 17 00:19:24.685221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967800877.mount: Deactivated successfully. May 17 00:19:24.691194 containerd[1474]: time="2025-05-17T00:19:24.691127569Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\"" May 17 00:19:24.692148 containerd[1474]: time="2025-05-17T00:19:24.692057379Z" level=info msg="StartContainer for \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\"" May 17 00:19:24.744720 systemd[1]: Started cri-containerd-efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf.scope - libcontainer container efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf. May 17 00:19:24.761105 kubelet[2687]: I0517 00:19:24.761026 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xt968" podStartSLOduration=1.869947776 podStartE2EDuration="9.760995925s" podCreationTimestamp="2025-05-17 00:19:15 +0000 UTC" firstStartedPulling="2025-05-17 00:19:15.889373668 +0000 UTC m=+6.467821814" lastFinishedPulling="2025-05-17 00:19:23.780421817 +0000 UTC m=+14.358869963" observedRunningTime="2025-05-17 00:19:24.69955006 +0000 UTC m=+15.277998206" watchObservedRunningTime="2025-05-17 00:19:24.760995925 +0000 UTC m=+15.339444071" May 17 00:19:24.786652 systemd[1]: cri-containerd-efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf.scope: Deactivated successfully. 
May 17 00:19:24.789167 containerd[1474]: time="2025-05-17T00:19:24.789001028Z" level=info msg="StartContainer for \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\" returns successfully" May 17 00:19:24.823854 containerd[1474]: time="2025-05-17T00:19:24.823764644Z" level=info msg="shim disconnected" id=efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf namespace=k8s.io May 17 00:19:24.823854 containerd[1474]: time="2025-05-17T00:19:24.823832605Z" level=warning msg="cleaning up after shim disconnected" id=efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf namespace=k8s.io May 17 00:19:24.823854 containerd[1474]: time="2025-05-17T00:19:24.823841925Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:25.661769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf-rootfs.mount: Deactivated successfully. May 17 00:19:25.674799 containerd[1474]: time="2025-05-17T00:19:25.674730521Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:19:25.701078 containerd[1474]: time="2025-05-17T00:19:25.700905798Z" level=info msg="CreateContainer within sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\"" May 17 00:19:25.702108 containerd[1474]: time="2025-05-17T00:19:25.701940329Z" level=info msg="StartContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\"" May 17 00:19:25.744937 systemd[1]: Started cri-containerd-e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351.scope - libcontainer container e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351. May 17 00:19:25.783983 containerd[1474]: time="2025-05-17T00:19:25.783924235Z" level=info msg="StartContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" returns successfully" May 17 00:19:25.952651 kubelet[2687]: I0517 00:19:25.951670 2687 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:19:26.002223 systemd[1]: Created slice kubepods-burstable-pod7a58bf13_1e06_4aed_a6f3_0f034ebe1dc2.slice - libcontainer container kubepods-burstable-pod7a58bf13_1e06_4aed_a6f3_0f034ebe1dc2.slice. May 17 00:19:26.009445 systemd[1]: Created slice kubepods-burstable-pod4d6f09f7_f17f_4123_8d5c_927cd05d2716.slice - libcontainer container kubepods-burstable-pod4d6f09f7_f17f_4123_8d5c_927cd05d2716.slice. 
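Once the cilium-agent container above starts, the kubelet reports that the node "just became ready", and the two burstable pod slices created next belong to the coredns pods whose volumes are mounted immediately afterwards. A hedged client-go sketch that watches kube-system pods and would surface those scheduling events as they happen (the kubeconfig path is again an assumption):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig; adjust the path for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Watch kube-system: once the node reports Ready, the pending coredns
	// pods from the log above get scheduled and show up here as events.
	w, err := clientset.CoreV1().Pods("kube-system").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s\t%s\t%s\n", ev.Type, pod.Name, pod.Status.Phase)
	}
}
```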
May 17 00:19:26.166527 kubelet[2687]: I0517 00:19:26.166482 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6f09f7-f17f-4123-8d5c-927cd05d2716-config-volume\") pod \"coredns-7c65d6cfc9-r9hp7\" (UID: \"4d6f09f7-f17f-4123-8d5c-927cd05d2716\") " pod="kube-system/coredns-7c65d6cfc9-r9hp7" May 17 00:19:26.166527 kubelet[2687]: I0517 00:19:26.166531 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j846b\" (UniqueName: \"kubernetes.io/projected/4d6f09f7-f17f-4123-8d5c-927cd05d2716-kube-api-access-j846b\") pod \"coredns-7c65d6cfc9-r9hp7\" (UID: \"4d6f09f7-f17f-4123-8d5c-927cd05d2716\") " pod="kube-system/coredns-7c65d6cfc9-r9hp7" May 17 00:19:26.166834 kubelet[2687]: I0517 00:19:26.166555 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2-config-volume\") pod \"coredns-7c65d6cfc9-z9k6x\" (UID: \"7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2\") " pod="kube-system/coredns-7c65d6cfc9-z9k6x" May 17 00:19:26.166834 kubelet[2687]: I0517 00:19:26.166573 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87z5\" (UniqueName: \"kubernetes.io/projected/7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2-kube-api-access-f87z5\") pod \"coredns-7c65d6cfc9-z9k6x\" (UID: \"7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2\") " pod="kube-system/coredns-7c65d6cfc9-z9k6x" May 17 00:19:26.309256 containerd[1474]: time="2025-05-17T00:19:26.309144190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z9k6x,Uid:7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2,Namespace:kube-system,Attempt:0,}" May 17 00:19:26.316101 containerd[1474]: time="2025-05-17T00:19:26.315774298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r9hp7,Uid:4d6f09f7-f17f-4123-8d5c-927cd05d2716,Namespace:kube-system,Attempt:0,}" May 17 00:19:26.669687 systemd[1]: run-containerd-runc-k8s.io-e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351-runc.SQB0kK.mount: Deactivated successfully. 
May 17 00:19:26.703191 kubelet[2687]: I0517 00:19:26.703025 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qnq9t" podStartSLOduration=6.09784918 podStartE2EDuration="11.703004335s" podCreationTimestamp="2025-05-17 00:19:15 +0000 UTC" firstStartedPulling="2025-05-17 00:19:15.77840833 +0000 UTC m=+6.356856476" lastFinishedPulling="2025-05-17 00:19:21.383563485 +0000 UTC m=+11.962011631" observedRunningTime="2025-05-17 00:19:26.700903953 +0000 UTC m=+17.279352099" watchObservedRunningTime="2025-05-17 00:19:26.703004335 +0000 UTC m=+17.281452481" May 17 00:19:28.005455 systemd-networkd[1375]: cilium_host: Link UP May 17 00:19:28.005967 systemd-networkd[1375]: cilium_net: Link UP May 17 00:19:28.005970 systemd-networkd[1375]: cilium_net: Gained carrier May 17 00:19:28.007750 systemd-networkd[1375]: cilium_host: Gained carrier May 17 00:19:28.122633 systemd-networkd[1375]: cilium_vxlan: Link UP May 17 00:19:28.122640 systemd-networkd[1375]: cilium_vxlan: Gained carrier May 17 00:19:28.417815 kernel: NET: Registered PF_ALG protocol family May 17 00:19:28.601798 systemd-networkd[1375]: cilium_host: Gained IPv6LL May 17 00:19:28.793678 systemd-networkd[1375]: cilium_net: Gained IPv6LL May 17 00:19:29.183069 systemd-networkd[1375]: lxc_health: Link UP May 17 00:19:29.186876 systemd-networkd[1375]: lxc_health: Gained carrier May 17 00:19:29.383716 kernel: eth0: renamed from tmp646f8 May 17 00:19:29.388856 systemd-networkd[1375]: lxceac949c619a1: Link UP May 17 00:19:29.389380 systemd-networkd[1375]: lxceac949c619a1: Gained carrier May 17 00:19:29.412906 kernel: eth0: renamed from tmp96489 May 17 00:19:29.411294 systemd-networkd[1375]: lxcef9a90d0d5af: Link UP May 17 00:19:29.419696 systemd-networkd[1375]: lxcef9a90d0d5af: Gained carrier May 17 00:19:29.563338 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL May 17 00:19:30.330186 systemd-networkd[1375]: lxc_health: Gained IPv6LL May 17 00:19:30.649912 systemd-networkd[1375]: lxcef9a90d0d5af: Gained IPv6LL May 17 00:19:30.714367 systemd-networkd[1375]: lxceac949c619a1: Gained IPv6LL May 17 00:19:33.622447 containerd[1474]: time="2025-05-17T00:19:33.622129988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:33.622447 containerd[1474]: time="2025-05-17T00:19:33.622189589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:33.622447 containerd[1474]: time="2025-05-17T00:19:33.622200469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:33.622447 containerd[1474]: time="2025-05-17T00:19:33.622297070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:33.661644 systemd[1]: run-containerd-runc-k8s.io-646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f-runc.2rwdhl.mount: Deactivated successfully. May 17 00:19:33.673760 systemd[1]: Started cri-containerd-646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f.scope - libcontainer container 646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f. May 17 00:19:33.677273 containerd[1474]: time="2025-05-17T00:19:33.677080312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:19:33.677273 containerd[1474]: time="2025-05-17T00:19:33.677263954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:19:33.677577 containerd[1474]: time="2025-05-17T00:19:33.677353275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:33.678473 containerd[1474]: time="2025-05-17T00:19:33.678382004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:19:33.714695 systemd[1]: Started cri-containerd-96489c05fb772df118172c3fdd92502abc3cf52e2714b74f89dd665f4325812e.scope - libcontainer container 96489c05fb772df118172c3fdd92502abc3cf52e2714b74f89dd665f4325812e. May 17 00:19:33.767733 containerd[1474]: time="2025-05-17T00:19:33.767683950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r9hp7,Uid:4d6f09f7-f17f-4123-8d5c-927cd05d2716,Namespace:kube-system,Attempt:0,} returns sandbox id \"96489c05fb772df118172c3fdd92502abc3cf52e2714b74f89dd665f4325812e\"" May 17 00:19:33.777178 containerd[1474]: time="2025-05-17T00:19:33.777012592Z" level=info msg="CreateContainer within sandbox \"96489c05fb772df118172c3fdd92502abc3cf52e2714b74f89dd665f4325812e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:19:33.796764 containerd[1474]: time="2025-05-17T00:19:33.796723966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z9k6x,Uid:7a58bf13-1e06-4aed-a6f3-0f034ebe1dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f\"" May 17 00:19:33.801751 containerd[1474]: time="2025-05-17T00:19:33.801398327Z" level=info msg="CreateContainer within sandbox \"646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:19:33.811927 containerd[1474]: time="2025-05-17T00:19:33.811606977Z" level=info msg="CreateContainer within sandbox \"96489c05fb772df118172c3fdd92502abc3cf52e2714b74f89dd665f4325812e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e66ed6ac8adc0a13e77fd56468b6bd6fdead546f0d50d9d7266b06fbb849349b\"" May 17 00:19:33.813896 containerd[1474]: time="2025-05-17T00:19:33.813591234Z" level=info msg="StartContainer for \"e66ed6ac8adc0a13e77fd56468b6bd6fdead546f0d50d9d7266b06fbb849349b\"" May 17 00:19:33.824783 containerd[1474]: time="2025-05-17T00:19:33.824728092Z" level=info msg="CreateContainer within sandbox \"646f8b0547da9b42deecbd082e7bd071de6f0b10b93a69445df791dd60e1982f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac7ffe670940f466d99aa0fbf5e8732942fc75c143a02e5e37b717e32b2c9f65\"" May 17 00:19:33.826344 containerd[1474]: time="2025-05-17T00:19:33.826305786Z" level=info msg="StartContainer for \"ac7ffe670940f466d99aa0fbf5e8732942fc75c143a02e5e37b717e32b2c9f65\"" May 17 00:19:33.851701 systemd[1]: Started cri-containerd-e66ed6ac8adc0a13e77fd56468b6bd6fdead546f0d50d9d7266b06fbb849349b.scope - libcontainer container e66ed6ac8adc0a13e77fd56468b6bd6fdead546f0d50d9d7266b06fbb849349b. May 17 00:19:33.875703 systemd[1]: Started cri-containerd-ac7ffe670940f466d99aa0fbf5e8732942fc75c143a02e5e37b717e32b2c9f65.scope - libcontainer container ac7ffe670940f466d99aa0fbf5e8732942fc75c143a02e5e37b717e32b2c9f65. 
May 17 00:19:33.895195 containerd[1474]: time="2025-05-17T00:19:33.894980431Z" level=info msg="StartContainer for \"e66ed6ac8adc0a13e77fd56468b6bd6fdead546f0d50d9d7266b06fbb849349b\" returns successfully" May 17 00:19:33.926754 containerd[1474]: time="2025-05-17T00:19:33.926659910Z" level=info msg="StartContainer for \"ac7ffe670940f466d99aa0fbf5e8732942fc75c143a02e5e37b717e32b2c9f65\" returns successfully" May 17 00:19:34.712678 kubelet[2687]: I0517 00:19:34.712556 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z9k6x" podStartSLOduration=19.712527814 podStartE2EDuration="19.712527814s" podCreationTimestamp="2025-05-17 00:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:34.71198053 +0000 UTC m=+25.290428756" watchObservedRunningTime="2025-05-17 00:19:34.712527814 +0000 UTC m=+25.290975960" May 17 00:19:36.339903 kubelet[2687]: I0517 00:19:36.339037 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r9hp7" podStartSLOduration=21.339007279 podStartE2EDuration="21.339007279s" podCreationTimestamp="2025-05-17 00:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:34.756324832 +0000 UTC m=+25.334772978" watchObservedRunningTime="2025-05-17 00:19:36.339007279 +0000 UTC m=+26.917455425" May 17 00:23:49.804023 systemd[1]: Started sshd@7-138.199.238.255:22-139.178.68.195:47118.service - OpenSSH per-connection server daemon (139.178.68.195:47118). May 17 00:23:50.780981 sshd[4103]: Accepted publickey for core from 139.178.68.195 port 47118 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:23:50.783877 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:50.789859 systemd-logind[1460]: New session 8 of user core. May 17 00:23:50.797955 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:23:51.557025 sshd[4103]: pam_unix(sshd:session): session closed for user core May 17 00:23:51.561529 systemd[1]: sshd@7-138.199.238.255:22-139.178.68.195:47118.service: Deactivated successfully. May 17 00:23:51.563926 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:23:51.566132 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. May 17 00:23:51.567673 systemd-logind[1460]: Removed session 8. May 17 00:23:56.742838 systemd[1]: Started sshd@8-138.199.238.255:22-139.178.68.195:50378.service - OpenSSH per-connection server daemon (139.178.68.195:50378). May 17 00:23:57.741912 sshd[4119]: Accepted publickey for core from 139.178.68.195 port 50378 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:23:57.744509 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:57.750573 systemd-logind[1460]: New session 9 of user core. May 17 00:23:57.755773 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:23:58.516597 sshd[4119]: pam_unix(sshd:session): session closed for user core May 17 00:23:58.521798 systemd[1]: sshd@8-138.199.238.255:22-139.178.68.195:50378.service: Deactivated successfully. May 17 00:23:58.524197 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:23:58.525902 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. 
May 17 00:23:58.527522 systemd-logind[1460]: Removed session 9. May 17 00:24:03.695888 systemd[1]: Started sshd@9-138.199.238.255:22-139.178.68.195:50380.service - OpenSSH per-connection server daemon (139.178.68.195:50380). May 17 00:24:04.673615 sshd[4132]: Accepted publickey for core from 139.178.68.195 port 50380 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:04.675617 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:04.681097 systemd-logind[1460]: New session 10 of user core. May 17 00:24:04.690786 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:24:05.432543 sshd[4132]: pam_unix(sshd:session): session closed for user core May 17 00:24:05.438973 systemd[1]: sshd@9-138.199.238.255:22-139.178.68.195:50380.service: Deactivated successfully. May 17 00:24:05.442384 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:24:05.443453 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. May 17 00:24:05.445080 systemd-logind[1460]: Removed session 10. May 17 00:24:05.613303 systemd[1]: Started sshd@10-138.199.238.255:22-139.178.68.195:41944.service - OpenSSH per-connection server daemon (139.178.68.195:41944). May 17 00:24:06.589925 sshd[4146]: Accepted publickey for core from 139.178.68.195 port 41944 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:06.592580 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:06.602702 systemd-logind[1460]: New session 11 of user core. May 17 00:24:06.611911 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:24:07.392278 sshd[4146]: pam_unix(sshd:session): session closed for user core May 17 00:24:07.397869 systemd[1]: sshd@10-138.199.238.255:22-139.178.68.195:41944.service: Deactivated successfully. May 17 00:24:07.399801 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:24:07.402629 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. May 17 00:24:07.404184 systemd-logind[1460]: Removed session 11. May 17 00:24:07.580995 systemd[1]: Started sshd@11-138.199.238.255:22-139.178.68.195:41950.service - OpenSSH per-connection server daemon (139.178.68.195:41950). May 17 00:24:08.582196 sshd[4156]: Accepted publickey for core from 139.178.68.195 port 41950 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:08.584917 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:08.590918 systemd-logind[1460]: New session 12 of user core. May 17 00:24:08.601621 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:24:09.358870 sshd[4156]: pam_unix(sshd:session): session closed for user core May 17 00:24:09.364792 systemd[1]: sshd@11-138.199.238.255:22-139.178.68.195:41950.service: Deactivated successfully. May 17 00:24:09.365765 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. May 17 00:24:09.369165 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:24:09.371633 systemd-logind[1460]: Removed session 12. May 17 00:24:14.537268 systemd[1]: Started sshd@12-138.199.238.255:22-139.178.68.195:42538.service - OpenSSH per-connection server daemon (139.178.68.195:42538). 
May 17 00:24:15.527199 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 42538 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:15.530108 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:15.535530 systemd-logind[1460]: New session 13 of user core. May 17 00:24:15.542803 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:24:16.281955 sshd[4171]: pam_unix(sshd:session): session closed for user core May 17 00:24:16.286945 systemd[1]: sshd@12-138.199.238.255:22-139.178.68.195:42538.service: Deactivated successfully. May 17 00:24:16.288779 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:24:16.289733 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. May 17 00:24:16.291003 systemd-logind[1460]: Removed session 13. May 17 00:24:16.460504 systemd[1]: Started sshd@13-138.199.238.255:22-139.178.68.195:42554.service - OpenSSH per-connection server daemon (139.178.68.195:42554). May 17 00:24:17.461304 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 42554 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:17.464985 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:17.473845 systemd-logind[1460]: New session 14 of user core. May 17 00:24:17.484793 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:24:18.287519 sshd[4187]: pam_unix(sshd:session): session closed for user core May 17 00:24:18.295871 systemd[1]: sshd@13-138.199.238.255:22-139.178.68.195:42554.service: Deactivated successfully. May 17 00:24:18.299449 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:24:18.302218 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. May 17 00:24:18.303336 systemd-logind[1460]: Removed session 14. May 17 00:24:18.465772 systemd[1]: Started sshd@14-138.199.238.255:22-139.178.68.195:42570.service - OpenSSH per-connection server daemon (139.178.68.195:42570). May 17 00:24:19.457580 sshd[4198]: Accepted publickey for core from 139.178.68.195 port 42570 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:19.460366 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:19.467521 systemd-logind[1460]: New session 15 of user core. May 17 00:24:19.470995 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:24:21.914590 sshd[4198]: pam_unix(sshd:session): session closed for user core May 17 00:24:21.919434 systemd[1]: sshd@14-138.199.238.255:22-139.178.68.195:42570.service: Deactivated successfully. May 17 00:24:21.922657 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:24:21.923948 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. May 17 00:24:21.925787 systemd-logind[1460]: Removed session 15. May 17 00:24:22.090047 systemd[1]: Started sshd@15-138.199.238.255:22-139.178.68.195:42574.service - OpenSSH per-connection server daemon (139.178.68.195:42574). May 17 00:24:23.093669 sshd[4216]: Accepted publickey for core from 139.178.68.195 port 42574 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:23.097078 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:23.103663 systemd-logind[1460]: New session 16 of user core. May 17 00:24:23.109801 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 17 00:24:23.976782 sshd[4216]: pam_unix(sshd:session): session closed for user core May 17 00:24:23.983283 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. May 17 00:24:23.984029 systemd[1]: sshd@15-138.199.238.255:22-139.178.68.195:42574.service: Deactivated successfully. May 17 00:24:23.986069 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:24:23.987156 systemd-logind[1460]: Removed session 16. May 17 00:24:24.152011 systemd[1]: Started sshd@16-138.199.238.255:22-139.178.68.195:47394.service - OpenSSH per-connection server daemon (139.178.68.195:47394). May 17 00:24:25.132256 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 47394 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:25.134713 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:25.139629 systemd-logind[1460]: New session 17 of user core. May 17 00:24:25.147873 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:24:25.879343 sshd[4226]: pam_unix(sshd:session): session closed for user core May 17 00:24:25.886884 systemd[1]: sshd@16-138.199.238.255:22-139.178.68.195:47394.service: Deactivated successfully. May 17 00:24:25.888983 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:24:25.891517 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. May 17 00:24:25.893287 systemd-logind[1460]: Removed session 17. May 17 00:24:31.063189 systemd[1]: Started sshd@17-138.199.238.255:22-139.178.68.195:47396.service - OpenSSH per-connection server daemon (139.178.68.195:47396). May 17 00:24:32.061781 sshd[4241]: Accepted publickey for core from 139.178.68.195 port 47396 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:32.063812 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:32.069836 systemd-logind[1460]: New session 18 of user core. May 17 00:24:32.074905 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:24:32.827239 sshd[4241]: pam_unix(sshd:session): session closed for user core May 17 00:24:32.833564 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. May 17 00:24:32.834306 systemd[1]: sshd@17-138.199.238.255:22-139.178.68.195:47396.service: Deactivated successfully. May 17 00:24:32.836805 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:24:32.839865 systemd-logind[1460]: Removed session 18. May 17 00:24:38.008595 systemd[1]: Started sshd@18-138.199.238.255:22-139.178.68.195:57960.service - OpenSSH per-connection server daemon (139.178.68.195:57960). May 17 00:24:38.998946 sshd[4256]: Accepted publickey for core from 139.178.68.195 port 57960 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:39.001408 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:39.008741 systemd-logind[1460]: New session 19 of user core. May 17 00:24:39.015868 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:24:39.767622 sshd[4256]: pam_unix(sshd:session): session closed for user core May 17 00:24:39.773506 systemd[1]: sshd@18-138.199.238.255:22-139.178.68.195:57960.service: Deactivated successfully. May 17 00:24:39.775961 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:24:39.777270 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. 
May 17 00:24:39.778898 systemd-logind[1460]: Removed session 19. May 17 00:24:39.938051 systemd[1]: Started sshd@19-138.199.238.255:22-139.178.68.195:57970.service - OpenSSH per-connection server daemon (139.178.68.195:57970). May 17 00:24:40.912766 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 57970 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:24:40.916324 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:40.923588 systemd-logind[1460]: New session 20 of user core. May 17 00:24:40.931756 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:24:43.270041 containerd[1474]: time="2025-05-17T00:24:43.269992598Z" level=info msg="StopContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" with timeout 30 (s)" May 17 00:24:43.272477 containerd[1474]: time="2025-05-17T00:24:43.272191044Z" level=info msg="Stop container \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" with signal terminated" May 17 00:24:43.281174 containerd[1474]: time="2025-05-17T00:24:43.280587507Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:24:43.290172 containerd[1474]: time="2025-05-17T00:24:43.290118933Z" level=info msg="StopContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" with timeout 2 (s)" May 17 00:24:43.290635 containerd[1474]: time="2025-05-17T00:24:43.290600934Z" level=info msg="Stop container \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" with signal terminated" May 17 00:24:43.296264 systemd[1]: cri-containerd-811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0.scope: Deactivated successfully. May 17 00:24:43.303073 systemd-networkd[1375]: lxc_health: Link DOWN May 17 00:24:43.303080 systemd-networkd[1375]: lxc_health: Lost carrier May 17 00:24:43.324341 systemd[1]: cri-containerd-e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351.scope: Deactivated successfully. May 17 00:24:43.324685 systemd[1]: cri-containerd-e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351.scope: Consumed 8.291s CPU time. May 17 00:24:43.330746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0-rootfs.mount: Deactivated successfully. May 17 00:24:43.340494 containerd[1474]: time="2025-05-17T00:24:43.340217389Z" level=info msg="shim disconnected" id=811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0 namespace=k8s.io May 17 00:24:43.340494 containerd[1474]: time="2025-05-17T00:24:43.340406949Z" level=warning msg="cleaning up after shim disconnected" id=811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0 namespace=k8s.io May 17 00:24:43.340494 containerd[1474]: time="2025-05-17T00:24:43.340417349Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:43.351303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351-rootfs.mount: Deactivated successfully. 
May 17 00:24:43.356487 containerd[1474]: time="2025-05-17T00:24:43.356377553Z" level=info msg="shim disconnected" id=e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351 namespace=k8s.io May 17 00:24:43.356810 containerd[1474]: time="2025-05-17T00:24:43.356497673Z" level=warning msg="cleaning up after shim disconnected" id=e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351 namespace=k8s.io May 17 00:24:43.356810 containerd[1474]: time="2025-05-17T00:24:43.356510593Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:43.364121 containerd[1474]: time="2025-05-17T00:24:43.363938453Z" level=info msg="StopContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" returns successfully" May 17 00:24:43.365351 containerd[1474]: time="2025-05-17T00:24:43.365122736Z" level=info msg="StopPodSandbox for \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\"" May 17 00:24:43.365351 containerd[1474]: time="2025-05-17T00:24:43.365176297Z" level=info msg="Container to stop \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.367421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb-shm.mount: Deactivated successfully. May 17 00:24:43.376733 systemd[1]: cri-containerd-834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb.scope: Deactivated successfully. May 17 00:24:43.395148 containerd[1474]: time="2025-05-17T00:24:43.395063178Z" level=info msg="StopContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" returns successfully" May 17 00:24:43.395944 containerd[1474]: time="2025-05-17T00:24:43.395856060Z" level=info msg="StopPodSandbox for \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\"" May 17 00:24:43.395944 containerd[1474]: time="2025-05-17T00:24:43.395940060Z" level=info msg="Container to stop \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.396091 containerd[1474]: time="2025-05-17T00:24:43.395957180Z" level=info msg="Container to stop \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.396091 containerd[1474]: time="2025-05-17T00:24:43.396002900Z" level=info msg="Container to stop \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.396091 containerd[1474]: time="2025-05-17T00:24:43.396019660Z" level=info msg="Container to stop \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.396091 containerd[1474]: time="2025-05-17T00:24:43.396030980Z" level=info msg="Container to stop \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:24:43.398294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf-shm.mount: Deactivated successfully. May 17 00:24:43.406125 systemd[1]: cri-containerd-11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf.scope: Deactivated successfully. 
May 17 00:24:43.428138 containerd[1474]: time="2025-05-17T00:24:43.427861547Z" level=info msg="shim disconnected" id=834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb namespace=k8s.io May 17 00:24:43.428138 containerd[1474]: time="2025-05-17T00:24:43.428078387Z" level=warning msg="cleaning up after shim disconnected" id=834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb namespace=k8s.io May 17 00:24:43.428138 containerd[1474]: time="2025-05-17T00:24:43.428095147Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:43.440477 containerd[1474]: time="2025-05-17T00:24:43.440227460Z" level=info msg="shim disconnected" id=11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf namespace=k8s.io May 17 00:24:43.440477 containerd[1474]: time="2025-05-17T00:24:43.440293380Z" level=warning msg="cleaning up after shim disconnected" id=11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf namespace=k8s.io May 17 00:24:43.440477 containerd[1474]: time="2025-05-17T00:24:43.440303020Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:43.450217 containerd[1474]: time="2025-05-17T00:24:43.449912646Z" level=info msg="TearDown network for sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" successfully" May 17 00:24:43.450217 containerd[1474]: time="2025-05-17T00:24:43.449962327Z" level=info msg="StopPodSandbox for \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" returns successfully" May 17 00:24:43.457084 containerd[1474]: time="2025-05-17T00:24:43.456851585Z" level=info msg="TearDown network for sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" successfully" May 17 00:24:43.457084 containerd[1474]: time="2025-05-17T00:24:43.456885145Z" level=info msg="StopPodSandbox for \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" returns successfully" May 17 00:24:43.498103 kubelet[2687]: I0517 00:24:43.498039 2687 scope.go:117] "RemoveContainer" containerID="811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0" May 17 00:24:43.504211 containerd[1474]: time="2025-05-17T00:24:43.504173034Z" level=info msg="RemoveContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\"" May 17 00:24:43.513938 containerd[1474]: time="2025-05-17T00:24:43.513873460Z" level=info msg="RemoveContainer for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" returns successfully" May 17 00:24:43.514312 kubelet[2687]: I0517 00:24:43.514274 2687 scope.go:117] "RemoveContainer" containerID="811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0" May 17 00:24:43.515636 kubelet[2687]: E0517 00:24:43.515218 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\": not found" containerID="811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0" May 17 00:24:43.515636 kubelet[2687]: I0517 00:24:43.515249 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0"} err="failed to get container status \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\": not found" May 17 
00:24:43.515636 kubelet[2687]: I0517 00:24:43.515326 2687 scope.go:117] "RemoveContainer" containerID="e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351" May 17 00:24:43.515729 containerd[1474]: time="2025-05-17T00:24:43.514630862Z" level=error msg="ContainerStatus for \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"811ddaabb19aab6f183adf0347b16aaab0dc73c8c1d83e5472bcb95718efc2c0\": not found" May 17 00:24:43.517541 containerd[1474]: time="2025-05-17T00:24:43.517374269Z" level=info msg="RemoveContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\"" May 17 00:24:43.521254 containerd[1474]: time="2025-05-17T00:24:43.521044959Z" level=info msg="RemoveContainer for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" returns successfully" May 17 00:24:43.522161 kubelet[2687]: I0517 00:24:43.521584 2687 scope.go:117] "RemoveContainer" containerID="efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf" May 17 00:24:43.523598 containerd[1474]: time="2025-05-17T00:24:43.523520966Z" level=info msg="RemoveContainer for \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\"" May 17 00:24:43.527122 containerd[1474]: time="2025-05-17T00:24:43.527035336Z" level=info msg="RemoveContainer for \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\" returns successfully" May 17 00:24:43.527772 kubelet[2687]: I0517 00:24:43.527599 2687 scope.go:117] "RemoveContainer" containerID="c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7" May 17 00:24:43.529904 containerd[1474]: time="2025-05-17T00:24:43.529843943Z" level=info msg="RemoveContainer for \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\"" May 17 00:24:43.533781 containerd[1474]: time="2025-05-17T00:24:43.533702634Z" level=info msg="RemoveContainer for \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\" returns successfully" May 17 00:24:43.534493 kubelet[2687]: I0517 00:24:43.533984 2687 scope.go:117] "RemoveContainer" containerID="1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b" May 17 00:24:43.535995 containerd[1474]: time="2025-05-17T00:24:43.535909320Z" level=info msg="RemoveContainer for \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\"" May 17 00:24:43.539953 containerd[1474]: time="2025-05-17T00:24:43.539912330Z" level=info msg="RemoveContainer for \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\" returns successfully" May 17 00:24:43.541621 kubelet[2687]: I0517 00:24:43.541593 2687 scope.go:117] "RemoveContainer" containerID="0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6" May 17 00:24:43.544162 containerd[1474]: time="2025-05-17T00:24:43.544065462Z" level=info msg="RemoveContainer for \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\"" May 17 00:24:43.550711 containerd[1474]: time="2025-05-17T00:24:43.550164078Z" level=info msg="RemoveContainer for \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\" returns successfully" May 17 00:24:43.551601 kubelet[2687]: I0517 00:24:43.551324 2687 scope.go:117] "RemoveContainer" containerID="e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351" May 17 00:24:43.552052 containerd[1474]: time="2025-05-17T00:24:43.551856083Z" level=error msg="ContainerStatus for \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\": not found" May 17 00:24:43.552300 kubelet[2687]: E0517 00:24:43.552139 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\": not found" containerID="e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351" May 17 00:24:43.552300 kubelet[2687]: I0517 00:24:43.552191 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351"} err="failed to get container status \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3c928b701fae4be42228a25ab6d8bbba068e2ee070051ca932690eca27f4351\": not found" May 17 00:24:43.552300 kubelet[2687]: I0517 00:24:43.552233 2687 scope.go:117] "RemoveContainer" containerID="efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf" May 17 00:24:43.552868 containerd[1474]: time="2025-05-17T00:24:43.552616565Z" level=error msg="ContainerStatus for \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\": not found" May 17 00:24:43.552938 kubelet[2687]: E0517 00:24:43.552743 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\": not found" containerID="efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf" May 17 00:24:43.552938 kubelet[2687]: I0517 00:24:43.552778 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf"} err="failed to get container status \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"efe78928673233e67267589c45aec98c4a2276afa376df459ab5c1518ae117cf\": not found" May 17 00:24:43.552938 kubelet[2687]: I0517 00:24:43.552795 2687 scope.go:117] "RemoveContainer" containerID="c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7" May 17 00:24:43.553402 containerd[1474]: time="2025-05-17T00:24:43.553177046Z" level=error msg="ContainerStatus for \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\": not found" May 17 00:24:43.553493 kubelet[2687]: E0517 00:24:43.553325 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\": not found" containerID="c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7" May 17 00:24:43.553493 kubelet[2687]: I0517 00:24:43.553346 2687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7"} err="failed to get container status \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9d4a488cd26d4ad7b749d29763d7adadec0ba9b679bb4dc6e405f9a7bd8a9d7\": not found" May 17 00:24:43.553493 kubelet[2687]: I0517 00:24:43.553362 2687 scope.go:117] "RemoveContainer" containerID="1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b" May 17 00:24:43.554040 containerd[1474]: time="2025-05-17T00:24:43.553887808Z" level=error msg="ContainerStatus for \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\": not found" May 17 00:24:43.554201 kubelet[2687]: E0517 00:24:43.554121 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\": not found" containerID="1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b" May 17 00:24:43.554201 kubelet[2687]: I0517 00:24:43.554147 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b"} err="failed to get container status \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e60e1a3f5190290a91d943e3ee08cd6445338fde6de36bc387030973b80855b\": not found" May 17 00:24:43.554201 kubelet[2687]: I0517 00:24:43.554167 2687 scope.go:117] "RemoveContainer" containerID="0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6" May 17 00:24:43.554765 containerd[1474]: time="2025-05-17T00:24:43.554449570Z" level=error msg="ContainerStatus for \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\": not found" May 17 00:24:43.554868 kubelet[2687]: E0517 00:24:43.554726 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\": not found" containerID="0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6" May 17 00:24:43.554952 kubelet[2687]: I0517 00:24:43.554750 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6"} err="failed to get container status \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0871bf5b5d14b25d7d8c04d327366c675efbf59ddfc133d5d356d9fb0f53a8f6\": not found" May 17 00:24:43.559627 kubelet[2687]: I0517 00:24:43.559589 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cni-path\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.559838 kubelet[2687]: 
I0517 00:24:43.559661 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-lib-modules\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.559838 kubelet[2687]: I0517 00:24:43.559681 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-run\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.559838 kubelet[2687]: I0517 00:24:43.559704 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b6815f8-f74d-4c8f-8edc-714d706aeb05-clustermesh-secrets\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.559838 kubelet[2687]: I0517 00:24:43.559756 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nk8\" (UniqueName: \"kubernetes.io/projected/2a27214d-b7cb-4f59-b6c8-de777421fb64-kube-api-access-d4nk8\") pod \"2a27214d-b7cb-4f59-b6c8-de777421fb64\" (UID: \"2a27214d-b7cb-4f59-b6c8-de777421fb64\") " May 17 00:24:43.559838 kubelet[2687]: I0517 00:24:43.559792 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hubble-tls\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.559838 kubelet[2687]: I0517 00:24:43.559812 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gs7x\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-kube-api-access-5gs7x\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559827 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-etc-cni-netd\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559842 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-cgroup\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559856 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-bpf-maps\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559873 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-config-path\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559890 2687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a27214d-b7cb-4f59-b6c8-de777421fb64-cilium-config-path\") pod \"2a27214d-b7cb-4f59-b6c8-de777421fb64\" (UID: \"2a27214d-b7cb-4f59-b6c8-de777421fb64\") " May 17 00:24:43.560064 kubelet[2687]: I0517 00:24:43.559904 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-net\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560199 kubelet[2687]: I0517 00:24:43.559920 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hostproc\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560199 kubelet[2687]: I0517 00:24:43.559936 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-xtables-lock\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560199 kubelet[2687]: I0517 00:24:43.559952 2687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-kernel\") pod \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\" (UID: \"5b6815f8-f74d-4c8f-8edc-714d706aeb05\") " May 17 00:24:43.560199 kubelet[2687]: I0517 00:24:43.560035 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.560199 kubelet[2687]: I0517 00:24:43.560074 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cni-path" (OuterVolumeSpecName: "cni-path") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.560309 kubelet[2687]: I0517 00:24:43.560089 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.560309 kubelet[2687]: I0517 00:24:43.560105 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.561269 kubelet[2687]: I0517 00:24:43.560518 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.566634 kubelet[2687]: I0517 00:24:43.566567 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.567087 kubelet[2687]: I0517 00:24:43.566872 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.571227 kubelet[2687]: I0517 00:24:43.571166 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-kube-api-access-5gs7x" (OuterVolumeSpecName: "kube-api-access-5gs7x") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "kube-api-access-5gs7x". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:24:43.574395 kubelet[2687]: I0517 00:24:43.574148 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.574395 kubelet[2687]: I0517 00:24:43.574213 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hostproc" (OuterVolumeSpecName: "hostproc") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.574395 kubelet[2687]: I0517 00:24:43.574247 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:24:43.575009 kubelet[2687]: I0517 00:24:43.574895 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a27214d-b7cb-4f59-b6c8-de777421fb64-kube-api-access-d4nk8" (OuterVolumeSpecName: "kube-api-access-d4nk8") pod "2a27214d-b7cb-4f59-b6c8-de777421fb64" (UID: "2a27214d-b7cb-4f59-b6c8-de777421fb64"). InnerVolumeSpecName "kube-api-access-d4nk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:24:43.575415 kubelet[2687]: I0517 00:24:43.575231 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b6815f8-f74d-4c8f-8edc-714d706aeb05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:24:43.576202 kubelet[2687]: I0517 00:24:43.575942 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:24:43.577227 kubelet[2687]: I0517 00:24:43.577187 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a27214d-b7cb-4f59-b6c8-de777421fb64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a27214d-b7cb-4f59-b6c8-de777421fb64" (UID: "2a27214d-b7cb-4f59-b6c8-de777421fb64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:24:43.578150 kubelet[2687]: I0517 00:24:43.578108 2687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5b6815f8-f74d-4c8f-8edc-714d706aeb05" (UID: "5b6815f8-f74d-4c8f-8edc-714d706aeb05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:24:43.660645 kubelet[2687]: I0517 00:24:43.660593 2687 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cni-path\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.660645 kubelet[2687]: I0517 00:24:43.660643 2687 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-lib-modules\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.660645 kubelet[2687]: I0517 00:24:43.660671 2687 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-run\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660689 2687 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b6815f8-f74d-4c8f-8edc-714d706aeb05-clustermesh-secrets\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660707 2687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nk8\" (UniqueName: \"kubernetes.io/projected/2a27214d-b7cb-4f59-b6c8-de777421fb64-kube-api-access-d4nk8\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660725 2687 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hubble-tls\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 
00:24:43.661012 kubelet[2687]: I0517 00:24:43.660742 2687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gs7x\" (UniqueName: \"kubernetes.io/projected/5b6815f8-f74d-4c8f-8edc-714d706aeb05-kube-api-access-5gs7x\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660758 2687 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-etc-cni-netd\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660775 2687 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-cgroup\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660792 2687 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-bpf-maps\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661012 kubelet[2687]: I0517 00:24:43.660810 2687 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b6815f8-f74d-4c8f-8edc-714d706aeb05-cilium-config-path\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661532 kubelet[2687]: I0517 00:24:43.660829 2687 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a27214d-b7cb-4f59-b6c8-de777421fb64-cilium-config-path\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661532 kubelet[2687]: I0517 00:24:43.660846 2687 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-net\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661532 kubelet[2687]: I0517 00:24:43.660862 2687 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-hostproc\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661532 kubelet[2687]: I0517 00:24:43.660895 2687 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-xtables-lock\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.661532 kubelet[2687]: I0517 00:24:43.660912 2687 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b6815f8-f74d-4c8f-8edc-714d706aeb05-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-0eec03f1fd\" DevicePath \"\"" May 17 00:24:43.807384 systemd[1]: Removed slice kubepods-besteffort-pod2a27214d_b7cb_4f59_b6c8_de777421fb64.slice - libcontainer container kubepods-besteffort-pod2a27214d_b7cb_4f59_b6c8_de777421fb64.slice. May 17 00:24:43.818323 systemd[1]: Removed slice kubepods-burstable-pod5b6815f8_f74d_4c8f_8edc_714d706aeb05.slice - libcontainer container kubepods-burstable-pod5b6815f8_f74d_4c8f_8edc_714d706aeb05.slice. May 17 00:24:43.818524 systemd[1]: kubepods-burstable-pod5b6815f8_f74d_4c8f_8edc_714d706aeb05.slice: Consumed 8.388s CPU time. 
May 17 00:24:44.261956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb-rootfs.mount: Deactivated successfully.
May 17 00:24:44.262230 systemd[1]: var-lib-kubelet-pods-2a27214d\x2db7cb\x2d4f59\x2db6c8\x2dde777421fb64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd4nk8.mount: Deactivated successfully.
May 17 00:24:44.262385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf-rootfs.mount: Deactivated successfully.
May 17 00:24:44.262492 systemd[1]: var-lib-kubelet-pods-5b6815f8\x2df74d\x2d4c8f\x2d8edc\x2d714d706aeb05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5gs7x.mount: Deactivated successfully.
May 17 00:24:44.262576 systemd[1]: var-lib-kubelet-pods-5b6815f8\x2df74d\x2d4c8f\x2d8edc\x2d714d706aeb05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:24:44.262663 systemd[1]: var-lib-kubelet-pods-5b6815f8\x2df74d\x2d4c8f\x2d8edc\x2d714d706aeb05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:24:44.747793 kubelet[2687]: E0517 00:24:44.747702 2687 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:24:45.342909 sshd[4269]: pam_unix(sshd:session): session closed for user core
May 17 00:24:45.349098 systemd[1]: sshd@19-138.199.238.255:22-139.178.68.195:57970.service: Deactivated successfully.
May 17 00:24:45.352080 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:24:45.353746 systemd[1]: session-20.scope: Consumed 1.186s CPU time.
May 17 00:24:45.354696 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
May 17 00:24:45.356054 systemd-logind[1460]: Removed session 20.
May 17 00:24:45.520020 systemd[1]: Started sshd@20-138.199.238.255:22-139.178.68.195:58522.service - OpenSSH per-connection server daemon (139.178.68.195:58522).
May 17 00:24:45.558498 kubelet[2687]: I0517 00:24:45.557633 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a27214d-b7cb-4f59-b6c8-de777421fb64" path="/var/lib/kubelet/pods/2a27214d-b7cb-4f59-b6c8-de777421fb64/volumes"
May 17 00:24:45.558498 kubelet[2687]: I0517 00:24:45.558120 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" path="/var/lib/kubelet/pods/5b6815f8-f74d-4c8f-8edc-714d706aeb05/volumes"
May 17 00:24:46.511707 sshd[4430]: Accepted publickey for core from 139.178.68.195 port 58522 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:24:46.514045 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:46.519855 systemd-logind[1460]: New session 21 of user core.
May 17 00:24:46.525102 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:24:47.229606 kubelet[2687]: I0517 00:24:47.227835 2687 setters.go:600] "Node became not ready" node="ci-4081-3-3-n-0eec03f1fd" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:24:47Z","lastTransitionTime":"2025-05-17T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014704 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a27214d-b7cb-4f59-b6c8-de777421fb64" containerName="cilium-operator"
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014742 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="mount-cgroup"
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014751 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="apply-sysctl-overwrites"
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014757 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="mount-bpf-fs"
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014763 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="clean-cilium-state"
May 17 00:24:49.015141 kubelet[2687]: E0517 00:24:49.014769 2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="cilium-agent"
May 17 00:24:49.015141 kubelet[2687]: I0517 00:24:49.014797 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6815f8-f74d-4c8f-8edc-714d706aeb05" containerName="cilium-agent"
May 17 00:24:49.015141 kubelet[2687]: I0517 00:24:49.014804 2687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a27214d-b7cb-4f59-b6c8-de777421fb64" containerName="cilium-operator"
May 17 00:24:49.023663 systemd[1]: Created slice kubepods-burstable-podf2ec2537_ca4c_428e_bc71_4e8b1ac89e43.slice - libcontainer container kubepods-burstable-podf2ec2537_ca4c_428e_bc71_4e8b1ac89e43.slice.
May 17 00:24:49.096078 kubelet[2687]: I0517 00:24:49.096025 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-etc-cni-netd\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096428 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-xtables-lock\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096508 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-hostproc\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096535 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-cilium-cgroup\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096577 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-host-proc-sys-kernel\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096601 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-cilium-run\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096708 kubelet[2687]: I0517 00:24:49.096621 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-lib-modules\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096904 kubelet[2687]: I0517 00:24:49.096667 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-hubble-tls\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.096904 kubelet[2687]: I0517 00:24:49.096689 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-clustermesh-secrets\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097057 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-cilium-config-path\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097090 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-cilium-ipsec-secrets\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097140 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9bg\" (UniqueName: \"kubernetes.io/projected/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-kube-api-access-fg9bg\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097166 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-cni-path\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097210 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-bpf-maps\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.097576 kubelet[2687]: I0517 00:24:49.097229 2687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2ec2537-ca4c-428e-bc71-4e8b1ac89e43-host-proc-sys-net\") pod \"cilium-ftzm8\" (UID: \"f2ec2537-ca4c-428e-bc71-4e8b1ac89e43\") " pod="kube-system/cilium-ftzm8"
May 17 00:24:49.167715 sshd[4430]: pam_unix(sshd:session): session closed for user core
May 17 00:24:49.173170 systemd[1]: sshd@20-138.199.238.255:22-139.178.68.195:58522.service: Deactivated successfully.
May 17 00:24:49.176997 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:24:49.177390 systemd[1]: session-21.scope: Consumed 1.838s CPU time.
May 17 00:24:49.178437 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
May 17 00:24:49.179589 systemd-logind[1460]: Removed session 21.
May 17 00:24:49.330746 containerd[1474]: time="2025-05-17T00:24:49.330595077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftzm8,Uid:f2ec2537-ca4c-428e-bc71-4e8b1ac89e43,Namespace:kube-system,Attempt:0,}"
May 17 00:24:49.347984 systemd[1]: Started sshd@21-138.199.238.255:22-139.178.68.195:58524.service - OpenSSH per-connection server daemon (139.178.68.195:58524).
May 17 00:24:49.359288 containerd[1474]: time="2025-05-17T00:24:49.359171954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:49.359527 containerd[1474]: time="2025-05-17T00:24:49.359317034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:49.359527 containerd[1474]: time="2025-05-17T00:24:49.359336474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:49.359601 containerd[1474]: time="2025-05-17T00:24:49.359515395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:49.379766 systemd[1]: Started cri-containerd-d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2.scope - libcontainer container d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2.
May 17 00:24:49.415810 containerd[1474]: time="2025-05-17T00:24:49.415761588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftzm8,Uid:f2ec2537-ca4c-428e-bc71-4e8b1ac89e43,Namespace:kube-system,Attempt:0,} returns sandbox id \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\""
May 17 00:24:49.424532 containerd[1474]: time="2025-05-17T00:24:49.424458411Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:24:49.437979 containerd[1474]: time="2025-05-17T00:24:49.437906688Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a\""
May 17 00:24:49.440523 containerd[1474]: time="2025-05-17T00:24:49.440379094Z" level=info msg="StartContainer for \"8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a\""
May 17 00:24:49.485134 systemd[1]: Started cri-containerd-8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a.scope - libcontainer container 8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a.
May 17 00:24:49.542743 containerd[1474]: time="2025-05-17T00:24:49.542691212Z" level=info msg="StartContainer for \"8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a\" returns successfully"
May 17 00:24:49.569797 systemd[1]: cri-containerd-8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a.scope: Deactivated successfully.
May 17 00:24:49.608598 containerd[1474]: time="2025-05-17T00:24:49.608212830Z" level=info msg="shim disconnected" id=8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a namespace=k8s.io
May 17 00:24:49.608598 containerd[1474]: time="2025-05-17T00:24:49.608323790Z" level=warning msg="cleaning up after shim disconnected" id=8ef6858e71b60afc509ff1febb2281abaead9f314b20d340a733c317d3a1674a namespace=k8s.io
May 17 00:24:49.608598 containerd[1474]: time="2025-05-17T00:24:49.608343550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:24:49.749232 kubelet[2687]: E0517 00:24:49.749128 2687 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:24:50.345739 sshd[4450]: Accepted publickey for core from 139.178.68.195 port 58524 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:24:50.347685 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:50.354997 systemd-logind[1460]: New session 22 of user core.
May 17 00:24:50.362786 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:24:50.548684 containerd[1474]: time="2025-05-17T00:24:50.548639700Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:24:50.565899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210355081.mount: Deactivated successfully.
May 17 00:24:50.569177 containerd[1474]: time="2025-05-17T00:24:50.569045716Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd\""
May 17 00:24:50.569786 containerd[1474]: time="2025-05-17T00:24:50.569706757Z" level=info msg="StartContainer for \"272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd\""
May 17 00:24:50.611700 systemd[1]: Started cri-containerd-272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd.scope - libcontainer container 272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd.
May 17 00:24:50.647390 containerd[1474]: time="2025-05-17T00:24:50.647238928Z" level=info msg="StartContainer for \"272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd\" returns successfully"
May 17 00:24:50.655799 systemd[1]: cri-containerd-272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd.scope: Deactivated successfully.
May 17 00:24:50.685338 containerd[1474]: time="2025-05-17T00:24:50.685056790Z" level=info msg="shim disconnected" id=272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd namespace=k8s.io
May 17 00:24:50.685338 containerd[1474]: time="2025-05-17T00:24:50.685176311Z" level=warning msg="cleaning up after shim disconnected" id=272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd namespace=k8s.io
May 17 00:24:50.685338 containerd[1474]: time="2025-05-17T00:24:50.685198071Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:24:51.031922 sshd[4450]: pam_unix(sshd:session): session closed for user core
May 17 00:24:51.038631 systemd[1]: sshd@21-138.199.238.255:22-139.178.68.195:58524.service: Deactivated successfully.
May 17 00:24:51.041058 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:24:51.043379 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
May 17 00:24:51.045411 systemd-logind[1460]: Removed session 22.
May 17 00:24:51.206303 systemd[1]: run-containerd-runc-k8s.io-272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd-runc.pQ5ZSW.mount: Deactivated successfully.
May 17 00:24:51.206426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-272079571ccb39b03d2af8cd0d49352a0dff97434f9196210fbf75fc36584dcd-rootfs.mount: Deactivated successfully.
May 17 00:24:51.216110 systemd[1]: Started sshd@22-138.199.238.255:22-139.178.68.195:58534.service - OpenSSH per-connection server daemon (139.178.68.195:58534).
May 17 00:24:51.559542 containerd[1474]: time="2025-05-17T00:24:51.559359482Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:24:51.583877 containerd[1474]: time="2025-05-17T00:24:51.583550027Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed\""
May 17 00:24:51.585648 containerd[1474]: time="2025-05-17T00:24:51.584345909Z" level=info msg="StartContainer for \"50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed\""
May 17 00:24:51.620833 systemd[1]: Started cri-containerd-50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed.scope - libcontainer container 50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed.
May 17 00:24:51.660732 systemd[1]: cri-containerd-50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed.scope: Deactivated successfully.
May 17 00:24:51.661105 containerd[1474]: time="2025-05-17T00:24:51.660924877Z" level=info msg="StartContainer for \"50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed\" returns successfully"
May 17 00:24:51.696578 containerd[1474]: time="2025-05-17T00:24:51.696397813Z" level=info msg="shim disconnected" id=50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed namespace=k8s.io
May 17 00:24:51.696578 containerd[1474]: time="2025-05-17T00:24:51.696566374Z" level=warning msg="cleaning up after shim disconnected" id=50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed namespace=k8s.io
May 17 00:24:51.696578 containerd[1474]: time="2025-05-17T00:24:51.696584334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:24:52.199860 sshd[4618]: Accepted publickey for core from 139.178.68.195 port 58534 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:24:52.203235 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:52.205788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50502be0d7d49e430bc4e7dd23acee505593ed070794412e2919a292608d65ed-rootfs.mount: Deactivated successfully.
May 17 00:24:52.210491 systemd-logind[1460]: New session 23 of user core.
May 17 00:24:52.216663 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:24:52.561953 containerd[1474]: time="2025-05-17T00:24:52.561825120Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:24:52.580108 containerd[1474]: time="2025-05-17T00:24:52.579836649Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96\""
May 17 00:24:52.580684 containerd[1474]: time="2025-05-17T00:24:52.580484331Z" level=info msg="StartContainer for \"9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96\""
May 17 00:24:52.616868 systemd[1]: Started cri-containerd-9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96.scope - libcontainer container 9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96.
May 17 00:24:52.650199 systemd[1]: cri-containerd-9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96.scope: Deactivated successfully.
May 17 00:24:52.652601 containerd[1474]: time="2025-05-17T00:24:52.652550447Z" level=info msg="StartContainer for \"9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96\" returns successfully"
May 17 00:24:52.683258 containerd[1474]: time="2025-05-17T00:24:52.683184410Z" level=info msg="shim disconnected" id=9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96 namespace=k8s.io
May 17 00:24:52.683258 containerd[1474]: time="2025-05-17T00:24:52.683250050Z" level=warning msg="cleaning up after shim disconnected" id=9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96 namespace=k8s.io
May 17 00:24:52.683258 containerd[1474]: time="2025-05-17T00:24:52.683259290Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:24:53.207004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f350c298c7cc2c1696145fa6140970d20a31581177689f202b84bc96f8e7e96-rootfs.mount: Deactivated successfully.
May 17 00:24:53.568558 containerd[1474]: time="2025-05-17T00:24:53.568343850Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:24:53.593355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003745081.mount: Deactivated successfully.
May 17 00:24:53.598442 containerd[1474]: time="2025-05-17T00:24:53.598306652Z" level=info msg="CreateContainer within sandbox \"d48f5017e1b1a5ddd582e6f2a04510589b01f593f3207610dff1eefc78772ac2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec\""
May 17 00:24:53.599137 containerd[1474]: time="2025-05-17T00:24:53.599101494Z" level=info msg="StartContainer for \"0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec\""
May 17 00:24:53.636702 systemd[1]: Started cri-containerd-0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec.scope - libcontainer container 0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec.
May 17 00:24:53.668598 containerd[1474]: time="2025-05-17T00:24:53.668528042Z" level=info msg="StartContainer for \"0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec\" returns successfully"
May 17 00:24:53.987866 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 17 00:24:54.592505 kubelet[2687]: I0517 00:24:54.592375 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ftzm8" podStartSLOduration=6.592346308 podStartE2EDuration="6.592346308s" podCreationTimestamp="2025-05-17 00:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:54.592046747 +0000 UTC m=+345.170494893" watchObservedRunningTime="2025-05-17 00:24:54.592346308 +0000 UTC m=+345.170794894"
May 17 00:24:57.042317 systemd-networkd[1375]: lxc_health: Link UP
May 17 00:24:57.061978 systemd-networkd[1375]: lxc_health: Gained carrier
May 17 00:24:58.585687 systemd-networkd[1375]: lxc_health: Gained IPv6LL
May 17 00:24:59.294626 systemd[1]: run-containerd-runc-k8s.io-0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec-runc.b4TOBA.mount: Deactivated successfully.
May 17 00:25:01.496013 systemd[1]: run-containerd-runc-k8s.io-0fb085c222134e5ed7d11213d929bd911cd40c117796e79974dea9b847e7f6ec-runc.BJrF4S.mount: Deactivated successfully.
May 17 00:25:03.928960 sshd[4618]: pam_unix(sshd:session): session closed for user core
May 17 00:25:03.935207 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit.
May 17 00:25:03.935931 systemd[1]: sshd@22-138.199.238.255:22-139.178.68.195:58534.service: Deactivated successfully.
May 17 00:25:03.941069 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:25:03.943836 systemd-logind[1460]: Removed session 23.
May 17 00:25:09.568278 containerd[1474]: time="2025-05-17T00:25:09.568212405Z" level=info msg="StopPodSandbox for \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\""
May 17 00:25:09.569199 containerd[1474]: time="2025-05-17T00:25:09.569033887Z" level=info msg="TearDown network for sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" successfully"
May 17 00:25:09.569199 containerd[1474]: time="2025-05-17T00:25:09.569077367Z" level=info msg="StopPodSandbox for \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" returns successfully"
May 17 00:25:09.570341 containerd[1474]: time="2025-05-17T00:25:09.570286851Z" level=info msg="RemovePodSandbox for \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\""
May 17 00:25:09.570341 containerd[1474]: time="2025-05-17T00:25:09.570341491Z" level=info msg="Forcibly stopping sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\""
May 17 00:25:09.570452 containerd[1474]: time="2025-05-17T00:25:09.570417211Z" level=info msg="TearDown network for sandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" successfully"
May 17 00:25:09.575609 containerd[1474]: time="2025-05-17T00:25:09.575534505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:25:09.575767 containerd[1474]: time="2025-05-17T00:25:09.575660625Z" level=info msg="RemovePodSandbox \"11cc1f04137d85427d920bb25a47d938bc36056147d4307fef5678fa3b8164cf\" returns successfully"
May 17 00:25:09.576787 containerd[1474]: time="2025-05-17T00:25:09.576738788Z" level=info msg="StopPodSandbox for \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\""
May 17 00:25:09.577118 containerd[1474]: time="2025-05-17T00:25:09.576896229Z" level=info msg="TearDown network for sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" successfully"
May 17 00:25:09.577118 containerd[1474]: time="2025-05-17T00:25:09.577018269Z" level=info msg="StopPodSandbox for \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" returns successfully"
May 17 00:25:09.578264 containerd[1474]: time="2025-05-17T00:25:09.578197672Z" level=info msg="RemovePodSandbox for \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\""
May 17 00:25:09.578264 containerd[1474]: time="2025-05-17T00:25:09.578243032Z" level=info msg="Forcibly stopping sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\""
May 17 00:25:09.578409 containerd[1474]: time="2025-05-17T00:25:09.578314512Z" level=info msg="TearDown network for sandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" successfully"
May 17 00:25:09.582247 containerd[1474]: time="2025-05-17T00:25:09.582175523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:25:09.582405 containerd[1474]: time="2025-05-17T00:25:09.582251723Z" level=info msg="RemovePodSandbox \"834cbbe73363ce900c4c218831fedb98b22d54233d987aa5643980181fe720fb\" returns successfully"