May 17 00:06:35.903139 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 17 00:06:35.903164 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:06:35.903175 kernel: KASLR enabled
May 17 00:06:35.903181 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 17 00:06:35.903188 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 17 00:06:35.903194 kernel: random: crng init done
May 17 00:06:35.903201 kernel: ACPI: Early table checksum verification disabled
May 17 00:06:35.903208 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 17 00:06:35.903214 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 17 00:06:35.903222 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903229 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903235 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903260 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903267 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903275 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903285 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903292 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903298 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:06:35.903305 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:06:35.903312 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 17 00:06:35.903319 kernel: NUMA: Failed to initialise from firmware
May 17 00:06:35.903326 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:06:35.903333 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
May 17 00:06:35.903339 kernel: Zone ranges:
May 17 00:06:35.903346 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 17 00:06:35.903354 kernel: DMA32 empty
May 17 00:06:35.903361 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 17 00:06:35.903368 kernel: Movable zone start for each node
May 17 00:06:35.903374 kernel: Early memory node ranges
May 17 00:06:35.903381 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 17 00:06:35.903388 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 17 00:06:35.903395 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 17 00:06:35.903401 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 17 00:06:35.903408 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 17 00:06:35.903415 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 17 00:06:35.903421 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 17 00:06:35.903429 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:06:35.903437 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 17 00:06:35.903443 kernel: psci: probing for conduit method from ACPI.
May 17 00:06:35.903450 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:06:35.903460 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:06:35.903467 kernel: psci: Trusted OS migration not required
May 17 00:06:35.903474 kernel: psci: SMC Calling Convention v1.1
May 17 00:06:35.903484 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 17 00:06:35.903491 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:06:35.903498 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:06:35.903506 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:06:35.903513 kernel: Detected PIPT I-cache on CPU0
May 17 00:06:35.903520 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:06:35.903527 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:06:35.903534 kernel: CPU features: detected: Spectre-v4
May 17 00:06:35.903541 kernel: CPU features: detected: Spectre-BHB
May 17 00:06:35.903549 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:06:35.903557 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:06:35.903565 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:06:35.903572 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:06:35.903579 kernel: alternatives: applying boot alternatives
May 17 00:06:35.903587 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:06:35.903595 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:06:35.903602 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:06:35.903610 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:06:35.903617 kernel: Fallback order for Node 0: 0
May 17 00:06:35.903624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 17 00:06:35.903631 kernel: Policy zone: Normal
May 17 00:06:35.903640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:06:35.903647 kernel: software IO TLB: area num 2.
May 17 00:06:35.903655 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 17 00:06:35.903662 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
May 17 00:06:35.903670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:06:35.903677 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:06:35.903685 kernel: rcu: RCU event tracing is enabled.
May 17 00:06:35.903692 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:06:35.903700 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:06:35.903707 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:06:35.903714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:06:35.903722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:06:35.903730 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:06:35.903737 kernel: GICv3: 256 SPIs implemented
May 17 00:06:35.903744 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:06:35.903751 kernel: Root IRQ handler: gic_handle_irq
May 17 00:06:35.903758 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 17 00:06:35.903766 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 17 00:06:35.903773 kernel: ITS [mem 0x08080000-0x0809ffff]
May 17 00:06:35.903780 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:06:35.903788 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 17 00:06:35.903795 kernel: GICv3: using LPI property table @0x00000001000e0000
May 17 00:06:35.903802 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 17 00:06:35.903811 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:06:35.903819 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:06:35.903826 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 17 00:06:35.903833 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:06:35.903841 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:06:35.903848 kernel: Console: colour dummy device 80x25
May 17 00:06:35.903855 kernel: ACPI: Core revision 20230628
May 17 00:06:35.903863 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:06:35.903871 kernel: pid_max: default: 32768 minimum: 301
May 17 00:06:35.903878 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:06:35.903887 kernel: landlock: Up and running.
May 17 00:06:35.903895 kernel: SELinux: Initializing.
May 17 00:06:35.903902 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:06:35.903910 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:06:35.903917 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 17 00:06:35.903925 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:06:35.903933 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:06:35.903940 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:06:35.903948 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:06:35.903956 kernel: Platform MSI: ITS@0x8080000 domain created
May 17 00:06:35.903996 kernel: PCI/MSI: ITS@0x8080000 domain created
May 17 00:06:35.904004 kernel: Remapping and enabling EFI services.
May 17 00:06:35.904012 kernel: smp: Bringing up secondary CPUs ...
May 17 00:06:35.904020 kernel: Detected PIPT I-cache on CPU1
May 17 00:06:35.904034 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 17 00:06:35.904045 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 17 00:06:35.904056 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:06:35.904063 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 17 00:06:35.904073 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:06:35.904081 kernel: SMP: Total of 2 processors activated.
May 17 00:06:35.904089 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:06:35.904102 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:06:35.904111 kernel: CPU features: detected: Common not Private translations
May 17 00:06:35.904119 kernel: CPU features: detected: CRC32 instructions
May 17 00:06:35.904127 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 00:06:35.904135 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:06:35.904142 kernel: CPU features: detected: LSE atomic instructions
May 17 00:06:35.904150 kernel: CPU features: detected: Privileged Access Never
May 17 00:06:35.904159 kernel: CPU features: detected: RAS Extension Support
May 17 00:06:35.904169 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 00:06:35.904176 kernel: CPU: All CPU(s) started at EL1
May 17 00:06:35.904184 kernel: alternatives: applying system-wide alternatives
May 17 00:06:35.904192 kernel: devtmpfs: initialized
May 17 00:06:35.904200 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:06:35.904208 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:06:35.904217 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:06:35.904225 kernel: SMBIOS 3.0.0 present.
May 17 00:06:35.904233 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 17 00:06:35.904345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:06:35.904356 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:06:35.904365 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:06:35.904373 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:06:35.904380 kernel: audit: initializing netlink subsys (disabled)
May 17 00:06:35.904388 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
May 17 00:06:35.904401 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:06:35.904409 kernel: cpuidle: using governor menu
May 17 00:06:35.904417 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:06:35.904424 kernel: ASID allocator initialised with 32768 entries
May 17 00:06:35.904432 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:06:35.904440 kernel: Serial: AMBA PL011 UART driver
May 17 00:06:35.904448 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:06:35.904456 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:06:35.904464 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:06:35.904474 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:06:35.904482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:06:35.904490 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:06:35.904497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:06:35.904505 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:06:35.904513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:06:35.904521 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:06:35.904529 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:06:35.904537 kernel: ACPI: Added _OSI(Module Device)
May 17 00:06:35.904546 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:06:35.904554 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:06:35.904562 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:06:35.904570 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:06:35.904579 kernel: ACPI: Interpreter enabled
May 17 00:06:35.904587 kernel: ACPI: Using GIC for interrupt routing
May 17 00:06:35.904595 kernel: ACPI: MCFG table detected, 1 entries
May 17 00:06:35.904603 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:06:35.904611 kernel: printk: console [ttyAMA0] enabled
May 17 00:06:35.904620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:06:35.904784 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:06:35.904872 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 17 00:06:35.904943 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 17 00:06:35.905025 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 17 00:06:35.905093 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 17 00:06:35.905104 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 17 00:06:35.905115 kernel: PCI host bridge to bus 0000:00
May 17 00:06:35.905202 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 17 00:06:35.905298 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 17 00:06:35.905374 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 17 00:06:35.905446 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:06:35.905535 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 17 00:06:35.905622 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 17 00:06:35.905699 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 17 00:06:35.905769 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:06:35.905850 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.905937 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 17 00:06:35.906063 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.906140 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 17 00:06:35.906223 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.908370 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 17 00:06:35.908485 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.908558 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 17 00:06:35.908638 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.908710 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 17 00:06:35.908797 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.908866 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 17 00:06:35.908950 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.909048 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 17 00:06:35.909143 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.909226 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 17 00:06:35.909345 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:06:35.909417 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 17 00:06:35.909493 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 17 00:06:35.909562 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 17 00:06:35.909646 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:06:35.909729 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 17 00:06:35.909812 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:06:35.909902 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:06:35.910015 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:06:35.910090 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 17 00:06:35.910182 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:06:35.911485 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 17 00:06:35.911607 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 17 00:06:35.911701 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:06:35.911773 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 17 00:06:35.911853 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:06:35.911929 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 17 00:06:35.912022 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 17 00:06:35.912110 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:06:35.912181 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 17 00:06:35.913886 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:06:35.914053 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:06:35.914145 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 17 00:06:35.914228 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 17 00:06:35.914322 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:06:35.914397 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:06:35.914479 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 17 00:06:35.914547 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 17 00:06:35.914619 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:06:35.914687 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:06:35.914756 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 17 00:06:35.914828 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:06:35.914897 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 17 00:06:35.914980 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:06:35.915059 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:06:35.915129 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 17 00:06:35.915196 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:06:35.915303 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 17 00:06:35.915382 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 17 00:06:35.915456 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 17 00:06:35.915530 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 17 00:06:35.915668 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 17 00:06:35.915748 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 17 00:06:35.915822 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:06:35.915892 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 17 00:06:35.915969 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 17 00:06:35.916057 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:06:35.916129 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 17 00:06:35.916202 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 17 00:06:35.916310 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:06:35.916384 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 17 00:06:35.916453 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 17 00:06:35.916524 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 00:06:35.916594 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:35.916667 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 00:06:35.916793 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:35.916895 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 00:06:35.917007 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:35.917089 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 17 00:06:35.917161 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:35.917233 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 17 00:06:35.919890 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:35.920039 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 17 00:06:35.920123 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:35.920198 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 17 00:06:35.920345 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:35.920425 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 17 00:06:35.920495 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:35.920569 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 17 00:06:35.920647 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:35.920723 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 17 00:06:35.920792 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 17 00:06:35.920867 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 17 00:06:35.920935 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:06:35.921026 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 17 00:06:35.921101 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:06:35.921178 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 17 00:06:35.921260 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:06:35.921336 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 17 00:06:35.921429 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 17 00:06:35.921503 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 17 00:06:35.921573 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 17 00:06:35.921643 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 17 00:06:35.921712 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 17 00:06:35.921784 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 17 00:06:35.921856 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 17 00:06:35.921928 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 17 00:06:35.922014 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 17 00:06:35.922086 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 17 00:06:35.922155 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 17 00:06:35.922231 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 17 00:06:35.922386 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 17 00:06:35.922466 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:06:35.922545 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 17 00:06:35.922615 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:06:35.922685 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 17 00:06:35.922754 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 17 00:06:35.922823 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:35.922902 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 17 00:06:35.923025 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:06:35.923107 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 17 00:06:35.923176 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 17 00:06:35.923262 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:35.923359 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:06:35.923432 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 17 00:06:35.923509 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:06:35.923578 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 17 00:06:35.923646 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 17 00:06:35.923714 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:35.923792 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:06:35.923863 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:06:35.923939 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 17 00:06:35.924022 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 17 00:06:35.924097 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:35.924174 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 17 00:06:35.924327 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 17 00:06:35.924410 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:06:35.924478 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 17 00:06:35.924545 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 17 00:06:35.924610 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:35.924685 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 17 00:06:35.924761 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 17 00:06:35.924829 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:06:35.924896 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 17 00:06:35.924997 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 17 00:06:35.925083 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:35.925162 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 17 00:06:35.925233 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 17 00:06:35.926483 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 17 00:06:35.926565 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:06:35.926635 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 17 00:06:35.926704 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 17 00:06:35.926775 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:35.926846 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:06:35.926914 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 17 00:06:35.927028 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 17 00:06:35.927108 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:35.927180 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:06:35.927262 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 17 00:06:35.927341 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 17 00:06:35.927411 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:35.927492 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 17 00:06:35.927561 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 17 00:06:35.927630 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 17 00:06:35.927721 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 17 00:06:35.927795 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:06:35.927866 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:06:35.927939 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 17 00:06:35.928016 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:06:35.928080 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:06:35.928153 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 17 00:06:35.928223 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:06:35.930451 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:06:35.930536 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:06:35.930600 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 17 00:06:35.930663 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:06:35.930735 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 17 00:06:35.930804 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 17 00:06:35.930866 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:06:35.930936 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 17 00:06:35.931021 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 17 00:06:35.931094 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:06:35.931184 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 17 00:06:35.931262 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 17 00:06:35.932496 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:06:35.932584 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 17 00:06:35.932657 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 17 00:06:35.932720 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:06:35.932793 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 17 00:06:35.932855 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 17 00:06:35.932917 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:06:35.932928 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 17 00:06:35.932937 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 17 00:06:35.932946 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 17 00:06:35.932954 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 17 00:06:35.932981 kernel: iommu: Default domain type: Translated
May 17 00:06:35.932994 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:06:35.933002 kernel: efivars: Registered efivars operations
May 17 00:06:35.933011 kernel: vgaarb: loaded
May 17 00:06:35.933019 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:06:35.933027 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:06:35.933036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:06:35.933044 kernel: pnp: PnP ACPI init
May 17 00:06:35.933131 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 17 00:06:35.933146 kernel: pnp: PnP ACPI: found 1 devices
May 17 00:06:35.933155 kernel: NET: Registered PF_INET protocol family
May 17 00:06:35.933163 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:06:35.933172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:06:35.933180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:06:35.933189 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:06:35.933197 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:06:35.933206 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:06:35.933214 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:06:35.933224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:06:35.933233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:06:35.934361 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 17 00:06:35.934386 kernel: PCI: CLS 0 bytes, default 64
May 17 00:06:35.934396 kernel: kvm [1]: HYP mode not available
May 17 00:06:35.934406 kernel: Initialise system trusted keyrings
May 17 00:06:35.934416 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:06:35.934426 kernel: Key type asymmetric registered
May 17 00:06:35.934436 kernel: Asymmetric key parser 'x509' registered
May 17 00:06:35.934452 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 00:06:35.934461 kernel: io scheduler mq-deadline registered
May 17 00:06:35.934469 kernel: io scheduler kyber registered
May 17 00:06:35.934477 kernel: io scheduler bfq registered
May 17 00:06:35.934487 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 17 00:06:35.934567 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 17 00:06:35.934642 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 17 00:06:35.934712 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:06:35.934788 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 17 00:06:35.934858 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 17 00:06:35.934930 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ May 17 00:06:35.935027 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 17 00:06:35.935100 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 17 00:06:35.935170 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.935314 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 17 00:06:35.935393 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 17 00:06:35.935470 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.935543 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 17 00:06:35.935612 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 17 00:06:35.935680 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.935758 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 17 00:06:35.935828 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 17 00:06:35.935907 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.935996 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 17 00:06:35.936069 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 17 00:06:35.936139 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.936215 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 17 00:06:35.936312 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 17 00:06:35.936386 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 
00:06:35.936398 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 17 00:06:35.936469 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 17 00:06:35.936543 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 17 00:06:35.936616 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:06:35.936628 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:06:35.936637 kernel: ACPI: button: Power Button [PWRB] May 17 00:06:35.936645 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:06:35.936720 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 17 00:06:35.936798 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 17 00:06:35.936810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:06:35.936819 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:06:35.936893 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 17 00:06:35.936904 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 17 00:06:35.936914 kernel: thunder_xcv, ver 1.0 May 17 00:06:35.936922 kernel: thunder_bgx, ver 1.0 May 17 00:06:35.936930 kernel: nicpf, ver 1.0 May 17 00:06:35.936938 kernel: nicvf, ver 1.0 May 17 00:06:35.937065 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:06:35.937140 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:06:35 UTC (1747440395) May 17 00:06:35.937156 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:06:35.937164 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:06:35.937173 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:06:35.937182 kernel: watchdog: Hard watchdog permanently disabled May 17 00:06:35.937190 kernel: NET: Registered PF_INET6 protocol family May 17 00:06:35.937198 kernel: Segment 
Routing with IPv6 May 17 00:06:35.937206 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:06:35.937215 kernel: NET: Registered PF_PACKET protocol family May 17 00:06:35.937223 kernel: Key type dns_resolver registered May 17 00:06:35.937233 kernel: registered taskstats version 1 May 17 00:06:35.937278 kernel: Loading compiled-in X.509 certificates May 17 00:06:35.937287 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:06:35.937295 kernel: Key type .fscrypt registered May 17 00:06:35.937303 kernel: Key type fscrypt-provisioning registered May 17 00:06:35.937312 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:06:35.937320 kernel: ima: Allocated hash algorithm: sha1 May 17 00:06:35.937328 kernel: ima: No architecture policies found May 17 00:06:35.937337 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:06:35.937348 kernel: clk: Disabling unused clocks May 17 00:06:35.937356 kernel: Freeing unused kernel memory: 39424K May 17 00:06:35.937364 kernel: Run /init as init process May 17 00:06:35.937372 kernel: with arguments: May 17 00:06:35.937381 kernel: /init May 17 00:06:35.937389 kernel: with environment: May 17 00:06:35.937396 kernel: HOME=/ May 17 00:06:35.937405 kernel: TERM=linux May 17 00:06:35.937413 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:06:35.937425 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:06:35.937436 systemd[1]: Detected virtualization kvm. May 17 00:06:35.937444 systemd[1]: Detected architecture arm64. May 17 00:06:35.937453 systemd[1]: Running in initrd. 
May 17 00:06:35.937461 systemd[1]: No hostname configured, using default hostname.
May 17 00:06:35.937469 systemd[1]: Hostname set to .
May 17 00:06:35.937478 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:06:35.937489 systemd[1]: Queued start job for default target initrd.target.
May 17 00:06:35.937498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:06:35.937507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:06:35.937516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:06:35.937525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:06:35.937535 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:06:35.937544 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:06:35.937556 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:06:35.937566 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:06:35.937575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:06:35.937584 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:06:35.937593 systemd[1]: Reached target paths.target - Path Units.
May 17 00:06:35.937601 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:06:35.937610 systemd[1]: Reached target swap.target - Swaps.
May 17 00:06:35.937619 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:06:35.937629 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:06:35.937640 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:06:35.937649 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:06:35.937658 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:06:35.937666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:06:35.937675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:06:35.937684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:06:35.937693 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:06:35.937701 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:06:35.937712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:06:35.937720 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:06:35.937729 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:06:35.937738 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:06:35.937747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:06:35.937756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:35.937765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:06:35.937773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:06:35.937806 systemd-journald[236]: Collecting audit messages is disabled.
May 17 00:06:35.937830 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:06:35.937843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:06:35.937852 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:06:35.937860 kernel: Bridge firewalling registered
May 17 00:06:35.937869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:35.937878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:06:35.937887 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:06:35.937897 systemd-journald[236]: Journal started
May 17 00:06:35.937918 systemd-journald[236]: Runtime Journal (/run/log/journal/24b53cfe1c5c44ab99ae82ef7a50e2de) is 8.0M, max 76.6M, 68.6M free.
May 17 00:06:35.901297 systemd-modules-load[237]: Inserted module 'overlay'
May 17 00:06:35.942255 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:06:35.917313 systemd-modules-load[237]: Inserted module 'br_netfilter'
May 17 00:06:35.944418 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:06:35.946598 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:06:35.952400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:06:35.959812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:06:35.964282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:06:35.972629 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:35.975675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:06:35.983530 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:06:35.984592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:06:35.992531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:06:36.012214 dracut-cmdline[272]: dracut-dracut-053
May 17 00:06:36.018775 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:06:36.031608 systemd-resolved[274]: Positive Trust Anchors:
May 17 00:06:36.031629 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:06:36.031661 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:06:36.037161 systemd-resolved[274]: Defaulting to hostname 'linux'.
May 17 00:06:36.039368 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:06:36.042008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:06:36.123289 kernel: SCSI subsystem initialized
May 17 00:06:36.127279 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:06:36.136295 kernel: iscsi: registered transport (tcp)
May 17 00:06:36.149539 kernel: iscsi: registered transport (qla4xxx)
May 17 00:06:36.149639 kernel: QLogic iSCSI HBA Driver
May 17 00:06:36.200120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:06:36.206500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:06:36.227313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:06:36.227430 kernel: device-mapper: uevent: version 1.0.3
May 17 00:06:36.227471 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:06:36.276322 kernel: raid6: neonx8 gen() 15659 MB/s
May 17 00:06:36.293313 kernel: raid6: neonx4 gen() 15597 MB/s
May 17 00:06:36.310294 kernel: raid6: neonx2 gen() 13187 MB/s
May 17 00:06:36.327296 kernel: raid6: neonx1 gen() 10406 MB/s
May 17 00:06:36.345057 kernel: raid6: int64x8 gen() 6921 MB/s
May 17 00:06:36.361447 kernel: raid6: int64x4 gen() 7313 MB/s
May 17 00:06:36.378306 kernel: raid6: int64x2 gen() 6082 MB/s
May 17 00:06:36.395300 kernel: raid6: int64x1 gen() 5036 MB/s
May 17 00:06:36.395384 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s
May 17 00:06:36.412319 kernel: raid6: .... xor() 11840 MB/s, rmw enabled
May 17 00:06:36.412396 kernel: raid6: using neon recovery algorithm
May 17 00:06:36.417558 kernel: xor: measuring software checksum speed
May 17 00:06:36.417649 kernel: 8regs : 19797 MB/sec
May 17 00:06:36.417685 kernel: 32regs : 19636 MB/sec
May 17 00:06:36.417706 kernel: arm64_neon : 26813 MB/sec
May 17 00:06:36.418370 kernel: xor: using function: arm64_neon (26813 MB/sec)
May 17 00:06:36.468373 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:06:36.486440 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:06:36.493633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:06:36.507186 systemd-udevd[456]: Using default interface naming scheme 'v255'.
May 17 00:06:36.510843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:06:36.523481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:06:36.538052 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 17 00:06:36.573604 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:06:36.580508 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:06:36.628309 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:06:36.637264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:06:36.659902 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:06:36.662061 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:06:36.664436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:06:36.666227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:06:36.674805 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:06:36.698891 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:06:36.749669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:06:36.749798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:36.765398 kernel: scsi host0: Virtio SCSI HBA
May 17 00:06:36.763668 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:06:36.767818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:06:36.769721 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:06:36.769773 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:06:36.768002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:36.771036 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:36.776687 kernel: ACPI: bus type USB registered
May 17 00:06:36.776712 kernel: usbcore: registered new interface driver usbfs
May 17 00:06:36.778276 kernel: usbcore: registered new interface driver hub
May 17 00:06:36.780319 kernel: usbcore: registered new device driver usb
May 17 00:06:36.780646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:06:36.796675 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 17 00:06:36.798670 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 17 00:06:36.802663 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:06:36.804292 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 17 00:06:36.811850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:36.825781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:06:36.831265 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:06:36.831501 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:06:36.831594 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:06:36.838041 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:06:36.838400 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:06:36.840707 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:06:36.842935 kernel: hub 1-0:1.0: USB hub found
May 17 00:06:36.843340 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:06:36.844762 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:06:36.845264 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 17 00:06:36.846490 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:06:36.846648 kernel: hub 2-0:1.0: USB hub found
May 17 00:06:36.846751 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 17 00:06:36.846845 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 17 00:06:36.846931 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:06:36.848648 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:06:36.852417 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:06:36.852471 kernel: GPT:17805311 != 80003071
May 17 00:06:36.852483 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:06:36.852493 kernel: GPT:17805311 != 80003071
May 17 00:06:36.852503 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:06:36.853261 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:36.853624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:36.856263 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 17 00:06:36.900805 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (511)
May 17 00:06:36.910843 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:06:36.915693 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (503)
May 17 00:06:36.920914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:06:36.927570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:06:36.940371 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:06:36.942181 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:06:36.947485 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:06:36.967422 disk-uuid[578]: Primary Header is updated.
May 17 00:06:36.967422 disk-uuid[578]: Secondary Entries is updated.
May 17 00:06:36.967422 disk-uuid[578]: Secondary Header is updated.
May 17 00:06:36.975261 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:36.979348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:36.983272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:37.081997 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 17 00:06:37.219352 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
May 17 00:06:37.219420 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 17 00:06:37.221052 kernel: usbcore: registered new interface driver usbhid
May 17 00:06:37.221102 kernel: usbhid: USB HID core driver
May 17 00:06:37.324293 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
May 17 00:06:37.454288 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
May 17 00:06:37.509010 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
May 17 00:06:37.990276 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:06:37.991641 disk-uuid[579]: The operation has completed successfully.
May 17 00:06:38.047100 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:06:38.047222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:06:38.064550 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:06:38.070034 sh[596]: Success
May 17 00:06:38.083334 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:06:38.138979 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:06:38.147441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:06:38.152036 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:06:38.172674 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:06:38.172750 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:06:38.172776 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:06:38.173291 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:06:38.174253 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:06:38.181301 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:06:38.183499 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:06:38.184814 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:06:38.189601 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:06:38.194453 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:06:38.210228 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:06:38.210338 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:06:38.210353 kernel: BTRFS info (device sda6): using free space tree
May 17 00:06:38.216281 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:06:38.216395 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:06:38.229538 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:06:38.231033 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:06:38.237699 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:06:38.248601 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:06:38.328340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:06:38.335579 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:06:38.358976 ignition[692]: Ignition 2.19.0
May 17 00:06:38.359615 systemd-networkd[783]: lo: Link UP
May 17 00:06:38.359619 systemd-networkd[783]: lo: Gained carrier
May 17 00:06:38.360749 ignition[692]: Stage: fetch-offline
May 17 00:06:38.361205 systemd-networkd[783]: Enumeration completed
May 17 00:06:38.360806 ignition[692]: no configs at "/usr/lib/ignition/base.d"
May 17 00:06:38.361326 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:06:38.360815 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:38.362567 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:06:38.361036 ignition[692]: parsed url from cmdline: ""
May 17 00:06:38.362571 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:06:38.361040 ignition[692]: no config URL provided
May 17 00:06:38.363594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:06:38.361046 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:06:38.364445 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:06:38.361056 ignition[692]: no config at "/usr/lib/ignition/user.ign"
May 17 00:06:38.364448 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:06:38.361062 ignition[692]: failed to fetch config: resource requires networking
May 17 00:06:38.365614 systemd[1]: Reached target network.target - Network.
May 17 00:06:38.361320 ignition[692]: Ignition finished successfully
May 17 00:06:38.365797 systemd-networkd[783]: eth0: Link UP
May 17 00:06:38.365800 systemd-networkd[783]: eth0: Gained carrier
May 17 00:06:38.365810 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:06:38.372624 systemd-networkd[783]: eth1: Link UP
May 17 00:06:38.372629 systemd-networkd[783]: eth1: Gained carrier
May 17 00:06:38.372639 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:06:38.376428 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:06:38.390147 ignition[788]: Ignition 2.19.0
May 17 00:06:38.390162 ignition[788]: Stage: fetch
May 17 00:06:38.390381 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 17 00:06:38.390392 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:38.390494 ignition[788]: parsed url from cmdline: ""
May 17 00:06:38.390500 ignition[788]: no config URL provided
May 17 00:06:38.390505 ignition[788]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:06:38.390524 ignition[788]: no config at "/usr/lib/ignition/user.ign"
May 17 00:06:38.390545 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 17 00:06:38.391185 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:06:38.405335 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:06:38.433401 systemd-networkd[783]: eth0: DHCPv4 address 49.12.39.64/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:06:38.591430 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 17 00:06:38.597655 ignition[788]: GET result: OK
May 17 00:06:38.597819 ignition[788]: parsing config with SHA512: 2265916f137457f97e16a2e96c5cc32d4fc104a3ac35240c12d237692c1291c13d9a00bb3035f7c08af15c3b6d62b8bbb576487a6b206a60f16489884d98f178
May 17 00:06:38.603737 unknown[788]: fetched base config from "system"
May 17 00:06:38.603746 unknown[788]: fetched base config from "system"
May 17 00:06:38.604190 ignition[788]: fetch: fetch complete
May 17 00:06:38.603752 unknown[788]: fetched user config from "hetzner"
May 17 00:06:38.604196 ignition[788]: fetch: fetch passed
May 17 00:06:38.606157 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:06:38.604239 ignition[788]: Ignition finished successfully
May 17 00:06:38.611453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:06:38.627914 ignition[795]: Ignition 2.19.0
May 17 00:06:38.627928 ignition[795]: Stage: kargs
May 17 00:06:38.628182 ignition[795]: no configs at "/usr/lib/ignition/base.d"
May 17 00:06:38.628196 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:38.629619 ignition[795]: kargs: kargs passed
May 17 00:06:38.629688 ignition[795]: Ignition finished successfully
May 17 00:06:38.632917 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:06:38.640513 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:06:38.654883 ignition[801]: Ignition 2.19.0
May 17 00:06:38.654904 ignition[801]: Stage: disks
May 17 00:06:38.655119 ignition[801]: no configs at "/usr/lib/ignition/base.d"
May 17 00:06:38.655129 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:38.658401 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:06:38.656230 ignition[801]: disks: disks passed
May 17 00:06:38.656316 ignition[801]: Ignition finished successfully
May 17 00:06:38.660585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:06:38.661736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:06:38.663731 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:06:38.665882 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:06:38.667831 systemd[1]: Reached target basic.target - Basic System.
May 17 00:06:38.681621 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:06:38.702843 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 17 00:06:38.707097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:06:38.712396 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:06:38.772286 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:06:38.773232 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:06:38.774637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:06:38.784453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:06:38.788400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:06:38.792520 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:06:38.793468 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:06:38.793503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:06:38.804814 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817)
May 17 00:06:38.804868 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:06:38.804888 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:06:38.804913 kernel: BTRFS info (device sda6): using free space tree
May 17 00:06:38.810901 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:06:38.818423 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:06:38.820271 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:06:38.820380 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:06:38.823789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:06:38.876964 coreos-metadata[819]: May 17 00:06:38.876 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 17 00:06:38.880311 coreos-metadata[819]: May 17 00:06:38.880 INFO Fetch successful
May 17 00:06:38.884289 coreos-metadata[819]: May 17 00:06:38.881 INFO wrote hostname ci-4081-3-3-n-bb4b333066 to /sysroot/etc/hostname
May 17 00:06:38.888278 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:06:38.889899 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:06:38.898375 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
May 17 00:06:38.904461 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:06:38.911418 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:06:39.051904 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:06:39.060520 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:06:39.064458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:06:39.077369 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:06:39.103301 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:06:39.107883 ignition[934]: INFO : Ignition 2.19.0
May 17 00:06:39.108795 ignition[934]: INFO : Stage: mount
May 17 00:06:39.109324 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:06:39.109324 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:39.110752 ignition[934]: INFO : mount: mount passed
May 17 00:06:39.110752 ignition[934]: INFO : Ignition finished successfully
May 17 00:06:39.113453 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:06:39.120526 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:06:39.172711 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:06:39.192529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:06:39.202290 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (947)
May 17 00:06:39.204829 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:06:39.204930 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:06:39.204977 kernel: BTRFS info (device sda6): using free space tree
May 17 00:06:39.210608 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:06:39.210695 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:06:39.213651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:06:39.247353 ignition[964]: INFO : Ignition 2.19.0
May 17 00:06:39.249264 ignition[964]: INFO : Stage: files
May 17 00:06:39.249264 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:06:39.249264 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:39.251210 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:06:39.253059 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:06:39.254023 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:06:39.259567 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:06:39.260849 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:06:39.263440 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:06:39.262734 unknown[964]: wrote ssh authorized keys file for user: core
May 17 00:06:39.267234 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:06:39.267234 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:06:39.267234 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:06:39.267234 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 00:06:40.047731 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:06:40.262443 systemd-networkd[783]: eth0: Gained IPv6LL
May 17 00:06:40.390643 systemd-networkd[783]: eth1: Gained IPv6LL
May 17 00:06:41.499025 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:06:41.500959 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:06:41.500959 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 17 00:06:42.146283 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 17 00:06:42.382515 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:06:42.382515 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:06:42.384768 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 00:06:43.050137 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 17 00:06:44.173697 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:06:44.173697 ignition[964]: INFO : files: op(d): [started] processing unit "containerd.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:06:44.176167 ignition[964]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:06:44.191458 ignition[964]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:06:44.191458 ignition[964]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:06:44.191458 ignition[964]: INFO : files: files passed
May 17 00:06:44.191458 ignition[964]: INFO : Ignition finished successfully
May 17 00:06:44.181276 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:06:44.187530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:06:44.192956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:06:44.198579 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:06:44.198716 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:06:44.220796 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:06:44.220796 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:06:44.223854 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:06:44.226482 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:06:44.227568 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:06:44.233600 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:06:44.268093 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:06:44.268323 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:06:44.272918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:06:44.273733 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:06:44.274986 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:06:44.276535 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:06:44.300305 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:06:44.305440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:06:44.319006 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:06:44.319844 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:06:44.321128 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:06:44.322279 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:06:44.322410 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:06:44.323884 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:06:44.324610 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:06:44.325825 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:06:44.326940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:06:44.328053 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:06:44.329238 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:06:44.330452 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:06:44.331751 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:06:44.332899 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:06:44.334117 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:06:44.335078 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:06:44.335204 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:06:44.336569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:06:44.337279 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:06:44.338366 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:06:44.340282 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:06:44.341422 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:06:44.341540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:06:44.343323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:06:44.343443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:06:44.344831 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:06:44.344956 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:06:44.345973 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:06:44.346066 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:06:44.359509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:06:44.360100 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:06:44.360262 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:06:44.364634 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:06:44.367799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:06:44.368136 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:06:44.374866 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:06:44.375064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:06:44.382850 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:06:44.382963 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:06:44.391384 ignition[1016]: INFO : Ignition 2.19.0
May 17 00:06:44.395541 ignition[1016]: INFO : Stage: umount
May 17 00:06:44.395541 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:06:44.395541 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:06:44.395541 ignition[1016]: INFO : umount: umount passed
May 17 00:06:44.395541 ignition[1016]: INFO : Ignition finished successfully
May 17 00:06:44.395049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:06:44.399368 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:06:44.399500 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:06:44.402506 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:06:44.402636 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:06:44.404180 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:06:44.404322 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:06:44.405448 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:06:44.405514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:06:44.406945 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:06:44.407013 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:06:44.407887 systemd[1]: Stopped target network.target - Network.
May 17 00:06:44.408717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:06:44.408768 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:06:44.409841 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:06:44.410704 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:06:44.414602 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:06:44.415390 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:06:44.416596 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:06:44.418190 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:06:44.418311 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:06:44.419476 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:06:44.419517 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:06:44.420379 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:06:44.420432 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:06:44.421456 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:06:44.421498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:06:44.422461 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:06:44.422508 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:06:44.423675 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:06:44.424522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:06:44.435332 systemd-networkd[783]: eth0: DHCPv6 lease lost
May 17 00:06:44.436331 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:06:44.436550 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:06:44.440556 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:06:44.440635 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:06:44.441340 systemd-networkd[783]: eth1: DHCPv6 lease lost
May 17 00:06:44.443431 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:06:44.444300 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:06:44.446429 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:06:44.446704 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:06:44.453429 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:06:44.454942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:06:44.455027 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:06:44.458507 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:06:44.458573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:06:44.459589 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:06:44.459669 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:06:44.462106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:06:44.479344 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:06:44.479690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:06:44.486835 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:06:44.487028 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:06:44.494103 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:06:44.494175 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:06:44.495409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:06:44.495444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:06:44.496422 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:06:44.496468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:06:44.498793 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:06:44.498841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:06:44.500771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:06:44.500822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:06:44.508540 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:06:44.509194 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:06:44.509283 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:06:44.512747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:06:44.512805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:06:44.528141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:06:44.528402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:06:44.532189 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:06:44.537419 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:06:44.551394 systemd[1]: Switching root.
May 17 00:06:44.589982 systemd-journald[236]: Journal stopped
May 17 00:06:45.641513 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 17 00:06:45.641604 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:06:45.641617 kernel: SELinux: policy capability open_perms=1
May 17 00:06:45.641631 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:06:45.641640 kernel: SELinux: policy capability always_check_network=0
May 17 00:06:45.641650 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:06:45.641659 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:06:45.641669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:06:45.641678 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:06:45.641687 kernel: audit: type=1403 audit(1747440404.860:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:06:45.641703 systemd[1]: Successfully loaded SELinux policy in 36.368ms.
May 17 00:06:45.641724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.281ms.
May 17 00:06:45.641737 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:06:45.641752 systemd[1]: Detected virtualization kvm.
May 17 00:06:45.641762 systemd[1]: Detected architecture arm64.
May 17 00:06:45.641772 systemd[1]: Detected first boot.
May 17 00:06:45.641783 systemd[1]: Hostname set to .
May 17 00:06:45.641794 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:06:45.641806 zram_generator::config[1077]: No configuration found.
May 17 00:06:45.641818 systemd[1]: Populated /etc with preset unit settings.
May 17 00:06:45.641831 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:06:45.641843 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 17 00:06:45.641854 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:06:45.641864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:06:45.641874 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:06:45.641885 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:06:45.641906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:06:45.641925 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:06:45.641937 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:06:45.641948 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:06:45.641958 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:06:45.641969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:06:45.641980 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:06:45.641995 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:06:45.642005 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:06:45.642015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:06:45.642026 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 17 00:06:45.642038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:06:45.642049 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:06:45.642059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:06:45.642071 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:06:45.642081 systemd[1]: Reached target slices.target - Slice Units. May 17 00:06:45.642092 systemd[1]: Reached target swap.target - Swaps. May 17 00:06:45.642102 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:06:45.642115 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:06:45.642126 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:06:45.642137 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:06:45.642148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:06:45.642159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:06:45.642169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:06:45.642181 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:06:45.642194 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:06:45.642206 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:06:45.642217 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:06:45.642227 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:06:45.642340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:06:45.642356 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:06:45.642366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:06:45.642378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:45.642392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 17 00:06:45.642403 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:06:45.642413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:45.642423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:45.642435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:45.642445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:06:45.642456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:45.642468 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:06:45.642481 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:06:45.642497 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 17 00:06:45.642508 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:06:45.642519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:06:45.642532 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:06:45.642543 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:06:45.642553 kernel: fuse: init (API version 7.39) May 17 00:06:45.642563 kernel: loop: module loaded May 17 00:06:45.642573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:06:45.642585 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:06:45.642596 kernel: ACPI: bus type drm_connector registered May 17 00:06:45.642606 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
May 17 00:06:45.642616 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:06:45.642626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:06:45.642637 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:06:45.642648 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:06:45.642658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:06:45.642669 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:06:45.642686 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:06:45.642698 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:06:45.642709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:45.642719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:45.642731 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:45.643317 systemd-journald[1157]: Collecting audit messages is disabled. May 17 00:06:45.643365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:45.643381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:45.643401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:45.643413 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:06:45.643425 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:06:45.643450 systemd-journald[1157]: Journal started May 17 00:06:45.643476 systemd-journald[1157]: Runtime Journal (/run/log/journal/24b53cfe1c5c44ab99ae82ef7a50e2de) is 8.0M, max 76.6M, 68.6M free. May 17 00:06:45.645318 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:06:45.645830 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:06:45.648697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:45.651344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:06:45.652360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:06:45.653639 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:06:45.665577 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:06:45.672482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:06:45.688479 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:06:45.691354 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:06:45.694761 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:06:45.713607 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:06:45.715558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:45.718714 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:06:45.721572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:45.728546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:45.734384 systemd-journald[1157]: Time spent on flushing to /var/log/journal/24b53cfe1c5c44ab99ae82ef7a50e2de is 63.057ms for 1117 entries. May 17 00:06:45.734384 systemd-journald[1157]: System Journal (/var/log/journal/24b53cfe1c5c44ab99ae82ef7a50e2de) is 8.0M, max 584.8M, 576.8M free. 
May 17 00:06:45.810866 systemd-journald[1157]: Received client request to flush runtime journal. May 17 00:06:45.736017 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:06:45.740219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:06:45.742555 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:06:45.748501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:06:45.757458 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:06:45.758802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:06:45.762878 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:06:45.788580 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:45.793830 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:06:45.815958 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:06:45.820172 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 17 00:06:45.820675 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 17 00:06:45.829656 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:06:45.838601 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:06:45.875934 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:06:45.885553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:06:45.901740 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. May 17 00:06:45.902153 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. 
May 17 00:06:45.909020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:06:46.315137 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:06:46.322760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:06:46.354356 systemd-udevd[1240]: Using default interface naming scheme 'v255'. May 17 00:06:46.378300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:06:46.393474 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:06:46.406414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:06:46.486094 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:06:46.517582 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. May 17 00:06:46.569276 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:06:46.614092 systemd-networkd[1246]: lo: Link UP May 17 00:06:46.614100 systemd-networkd[1246]: lo: Gained carrier May 17 00:06:46.616490 systemd-networkd[1246]: Enumeration completed May 17 00:06:46.616665 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:06:46.617201 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:46.617205 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:06:46.618412 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:46.618431 systemd-networkd[1246]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 00:06:46.619563 systemd-networkd[1246]: eth0: Link UP May 17 00:06:46.619574 systemd-networkd[1246]: eth0: Gained carrier May 17 00:06:46.619588 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:46.623743 systemd-networkd[1246]: eth1: Link UP May 17 00:06:46.623755 systemd-networkd[1246]: eth1: Gained carrier May 17 00:06:46.623776 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:46.629061 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:06:46.635586 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:06:46.651358 systemd-networkd[1246]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:06:46.687312 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1241) May 17 00:06:46.693961 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:06:46.694035 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:06:46.694048 kernel: [drm] features: -context_init May 17 00:06:46.698574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:46.704804 systemd-networkd[1246]: eth0: DHCPv4 address 49.12.39.64/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:06:46.720276 kernel: [drm] number of scanouts: 1 May 17 00:06:46.725127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:46.738846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 17 00:06:46.744391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:46.745118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:06:46.745190 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:06:46.745614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:46.745821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:46.756267 kernel: [drm] number of cap sets: 0 May 17 00:06:46.756341 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:06:46.764259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:46.766526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:46.781792 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:46.782128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:46.789001 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:06:46.803470 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:06:46.813225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:06:46.817061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:46.817287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:46.833030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:46.839269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 17 00:06:46.839521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:46.847537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:06:46.912731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:06:46.962685 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:06:46.972515 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:06:46.987344 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:47.017702 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:06:47.018632 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:06:47.025487 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:06:47.030345 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:06:47.058453 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:06:47.060674 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:06:47.062389 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:06:47.062443 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:06:47.063703 systemd[1]: Reached target machines.target - Containers. May 17 00:06:47.067036 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:06:47.074514 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:06:47.077475 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 17 00:06:47.078407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:47.080474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:06:47.087546 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:06:47.099645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:06:47.104445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:06:47.123328 kernel: loop0: detected capacity change from 0 to 114432 May 17 00:06:47.126769 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:06:47.138616 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:06:47.145262 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:06:47.154501 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:06:47.174325 kernel: loop1: detected capacity change from 0 to 203944 May 17 00:06:47.208414 kernel: loop2: detected capacity change from 0 to 114328 May 17 00:06:47.255108 kernel: loop3: detected capacity change from 0 to 8 May 17 00:06:47.284396 kernel: loop4: detected capacity change from 0 to 114432 May 17 00:06:47.297950 kernel: loop5: detected capacity change from 0 to 203944 May 17 00:06:47.315403 kernel: loop6: detected capacity change from 0 to 114328 May 17 00:06:47.328310 kernel: loop7: detected capacity change from 0 to 8 May 17 00:06:47.329792 (sd-merge)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:06:47.331062 (sd-merge)[1337]: Merged extensions into '/usr'. 
May 17 00:06:47.337215 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:06:47.337233 systemd[1]: Reloading... May 17 00:06:47.409354 zram_generator::config[1365]: No configuration found. May 17 00:06:47.544663 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:06:47.572711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:47.633092 systemd[1]: Reloading finished in 295 ms. May 17 00:06:47.654297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:06:47.655955 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:06:47.665465 systemd[1]: Starting ensure-sysext.service... May 17 00:06:47.669464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:06:47.676283 systemd[1]: Reloading requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... May 17 00:06:47.676430 systemd[1]: Reloading... May 17 00:06:47.704203 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:06:47.704488 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:06:47.705161 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:06:47.708055 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. May 17 00:06:47.708193 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. May 17 00:06:47.711433 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. 
May 17 00:06:47.711568 systemd-tmpfiles[1410]: Skipping /boot May 17 00:06:47.720210 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:06:47.720445 systemd-tmpfiles[1410]: Skipping /boot May 17 00:06:47.759271 zram_generator::config[1438]: No configuration found. May 17 00:06:47.876287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:47.937816 systemd[1]: Reloading finished in 261 ms. May 17 00:06:47.955864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:06:47.970597 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:06:47.976513 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:06:47.979965 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:06:47.987181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:06:47.994982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:06:48.002952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:48.011570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:48.018571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:48.026177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:48.029777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:48.033711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 17 00:06:48.033940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:48.039805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:48.040025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:48.052131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:06:48.065317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:48.070449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:48.074677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:48.075630 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:06:48.079968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:06:48.082954 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:48.083120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:48.089844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:06:48.090047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:48.095987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:48.097480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:48.108020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:06:48.110009 augenrules[1517]: No rules May 17 00:06:48.114766 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:06:48.122744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 17 00:06:48.130623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:06:48.136646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:06:48.142575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:06:48.145975 systemd-resolved[1487]: Positive Trust Anchors: May 17 00:06:48.147320 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:06:48.147360 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:06:48.147682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:06:48.151958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:06:48.153484 systemd-resolved[1487]: Using system hostname 'ci-4081-3-3-n-bb4b333066'. May 17 00:06:48.156769 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:06:48.161478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:06:48.167630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:06:48.169078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 17 00:06:48.169750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:06:48.172648 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:06:48.172832 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:06:48.174088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:06:48.174448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:06:48.176779 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:06:48.177104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:06:48.187211 systemd[1]: Finished ensure-sysext.service. May 17 00:06:48.188691 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:06:48.191700 systemd[1]: Reached target network.target - Network. May 17 00:06:48.192633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:06:48.193439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:06:48.193520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:06:48.199508 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:06:48.255711 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:06:48.258549 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:06:48.261775 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:06:48.263803 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:06:48.264683 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 17 00:06:48.265465 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:06:48.265500 systemd[1]: Reached target paths.target - Path Units. May 17 00:06:48.266075 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:06:48.266870 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:06:48.267652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:06:48.268368 systemd[1]: Reached target timers.target - Timer Units. May 17 00:06:48.268816 systemd-timesyncd[1551]: Contacted time server 188.68.34.173:123 (0.flatcar.pool.ntp.org). May 17 00:06:48.268904 systemd-timesyncd[1551]: Initial clock synchronization to Sat 2025-05-17 00:06:48.377037 UTC. May 17 00:06:48.269437 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:06:48.271762 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:06:48.273708 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:06:48.277182 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:06:48.278800 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:06:48.280100 systemd[1]: Reached target basic.target - Basic System. May 17 00:06:48.281084 systemd[1]: System is tainted: cgroupsv1 May 17 00:06:48.281238 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:48.281354 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:06:48.282832 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:06:48.287486 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:06:48.291572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
May 17 00:06:48.298408 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:06:48.305408 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:06:48.305992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:06:48.308769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:06:48.315114 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:06:48.324120 jq[1559]: false May 17 00:06:48.326844 coreos-metadata[1556]: May 17 00:06:48.323 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:06:48.326844 coreos-metadata[1556]: May 17 00:06:48.325 INFO Fetch successful May 17 00:06:48.326844 coreos-metadata[1556]: May 17 00:06:48.326 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 17 00:06:48.329511 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:06:48.332471 coreos-metadata[1556]: May 17 00:06:48.331 INFO Fetch successful May 17 00:06:48.352499 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:06:48.357108 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:06:48.363452 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:06:48.367008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:06:48.369119 dbus-daemon[1558]: [system] SELinux support is enabled May 17 00:06:48.377448 systemd[1]: Starting update-engine.service - Update Engine... 
May 17 00:06:48.380465 extend-filesystems[1562]: Found loop4
May 17 00:06:48.380465 extend-filesystems[1562]: Found loop5
May 17 00:06:48.380465 extend-filesystems[1562]: Found loop6
May 17 00:06:48.380465 extend-filesystems[1562]: Found loop7
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda1
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda2
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda3
May 17 00:06:48.380465 extend-filesystems[1562]: Found usr
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda4
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda6
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda7
May 17 00:06:48.380465 extend-filesystems[1562]: Found sda9
May 17 00:06:48.380465 extend-filesystems[1562]: Checking size of /dev/sda9
May 17 00:06:48.384915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:06:48.388599 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:06:48.417186 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:06:48.417478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:06:48.432755 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:06:48.433133 extend-filesystems[1562]: Resized partition /dev/sda9
May 17 00:06:48.433077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:06:48.434110 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:06:48.434421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:06:48.442261 extend-filesystems[1594]: resize2fs 1.47.1 (20-May-2024)
May 17 00:06:48.443102 jq[1583]: true
May 17 00:06:48.445260 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 17 00:06:48.468985 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:06:48.469054 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:06:48.472550 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:06:48.472590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:06:48.481760 (ntainerd)[1606]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:06:48.498070 tar[1592]: linux-arm64/helm
May 17 00:06:48.506599 jq[1599]: true
May 17 00:06:48.510019 update_engine[1581]: I20250517 00:06:48.508015 1581 main.cc:92] Flatcar Update Engine starting
May 17 00:06:48.520951 systemd-networkd[1246]: eth1: Gained IPv6LL
May 17 00:06:48.527798 update_engine[1581]: I20250517 00:06:48.523463 1581 update_check_scheduler.cc:74] Next update check in 3m28s
May 17 00:06:48.523977 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:06:48.527087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:06:48.539706 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:06:48.547968 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:06:48.576838 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:06:48.599412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:06:48.609941 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1250)
May 17 00:06:48.606042 systemd-logind[1575]: New seat seat0.
May 17 00:06:48.614466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:06:48.616413 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 00:06:48.616430 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
May 17 00:06:48.616945 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:06:48.633513 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 17 00:06:48.650052 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:06:48.657161 extend-filesystems[1594]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 17 00:06:48.657161 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 5
May 17 00:06:48.657161 extend-filesystems[1594]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 17 00:06:48.650357 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:06:48.691179 extend-filesystems[1562]: Resized filesystem in /dev/sda9
May 17 00:06:48.691179 extend-filesystems[1562]: Found sr0
May 17 00:06:48.660358 systemd-networkd[1246]: eth0: Gained IPv6LL
May 17 00:06:48.674697 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:06:48.696531 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:06:48.724080 bash[1654]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:06:48.727440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:06:48.745824 systemd[1]: Starting sshkeys.service...
May 17 00:06:48.774783 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 00:06:48.786647 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:06:48.796832 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:06:48.835518 coreos-metadata[1664]: May 17 00:06:48.834 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 17 00:06:48.837130 coreos-metadata[1664]: May 17 00:06:48.836 INFO Fetch successful
May 17 00:06:48.844927 unknown[1664]: wrote ssh authorized keys file for user: core
May 17 00:06:48.901209 containerd[1606]: time="2025-05-17T00:06:48.900751680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:06:48.925360 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:06:48.924549 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:06:48.937114 systemd[1]: Finished sshkeys.service.
May 17 00:06:48.945796 containerd[1606]: time="2025-05-17T00:06:48.945308200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.946869 containerd[1606]: time="2025-05-17T00:06:48.946817600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:06:48.946929 containerd[1606]: time="2025-05-17T00:06:48.946905880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:06:48.946964 containerd[1606]: time="2025-05-17T00:06:48.946930360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:06:48.947116 containerd[1606]: time="2025-05-17T00:06:48.947091040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:06:48.947143 containerd[1606]: time="2025-05-17T00:06:48.947115480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.947208 containerd[1606]: time="2025-05-17T00:06:48.947184920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:06:48.947208 containerd[1606]: time="2025-05-17T00:06:48.947203720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.947486 containerd[1606]: time="2025-05-17T00:06:48.947460200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:06:48.947486 containerd[1606]: time="2025-05-17T00:06:48.947484600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.947537 containerd[1606]: time="2025-05-17T00:06:48.947497960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:06:48.947537 containerd[1606]: time="2025-05-17T00:06:48.947507640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.947584800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.947775280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.947927720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.947943760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.948030840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:06:48.948644 containerd[1606]: time="2025-05-17T00:06:48.948072000Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:06:48.952494 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:06:48.956349 containerd[1606]: time="2025-05-17T00:06:48.956295040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:06:48.956486 containerd[1606]: time="2025-05-17T00:06:48.956452560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:06:48.956528 containerd[1606]: time="2025-05-17T00:06:48.956491680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:06:48.956528 containerd[1606]: time="2025-05-17T00:06:48.956516520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:06:48.956574 containerd[1606]: time="2025-05-17T00:06:48.956532200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:06:48.957007 containerd[1606]: time="2025-05-17T00:06:48.956700320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:06:48.957094 containerd[1606]: time="2025-05-17T00:06:48.957065680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:06:48.957216 containerd[1606]: time="2025-05-17T00:06:48.957193400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:06:48.957276 containerd[1606]: time="2025-05-17T00:06:48.957216160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:06:48.957276 containerd[1606]: time="2025-05-17T00:06:48.957231800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:06:48.958928 containerd[1606]: time="2025-05-17T00:06:48.958867720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:06:48.958928 containerd[1606]: time="2025-05-17T00:06:48.958930680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959029 containerd[1606]: time="2025-05-17T00:06:48.958945880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959029 containerd[1606]: time="2025-05-17T00:06:48.958990520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959029 containerd[1606]: time="2025-05-17T00:06:48.959007280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959029 containerd[1606]: time="2025-05-17T00:06:48.959021240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959101 containerd[1606]: time="2025-05-17T00:06:48.959033800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959101 containerd[1606]: time="2025-05-17T00:06:48.959054160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:06:48.959101 containerd[1606]: time="2025-05-17T00:06:48.959077120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959101 containerd[1606]: time="2025-05-17T00:06:48.959091400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959103560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959125640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959139200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959152440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959167440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:06:48.959208 containerd[1606]: time="2025-05-17T00:06:48.959181240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960270200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960298520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960312640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960326520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960352280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960387840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960425800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960442160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960454080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:06:48.960729 containerd[1606]: time="2025-05-17T00:06:48.960663080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.960686480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.960961800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.960982000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.960993320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.961006040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.961030480Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:06:48.961549 containerd[1606]: time="2025-05-17T00:06:48.961046520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:06:48.965679 containerd[1606]: time="2025-05-17T00:06:48.965285880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:06:48.965679 containerd[1606]: time="2025-05-17T00:06:48.965396200Z" level=info msg="Connect containerd service"
May 17 00:06:48.965679 containerd[1606]: time="2025-05-17T00:06:48.965453200Z" level=info msg="using legacy CRI server"
May 17 00:06:48.965679 containerd[1606]: time="2025-05-17T00:06:48.965461400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:06:48.965679 containerd[1606]: time="2025-05-17T00:06:48.965657960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.971947400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972504400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972553960Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972653640Z" level=info msg="Start subscribing containerd event"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972694680Z" level=info msg="Start recovering state"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972774080Z" level=info msg="Start event monitor"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972787440Z" level=info msg="Start snapshots syncer"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972801080Z" level=info msg="Start cni network conf syncer for default"
May 17 00:06:48.975262 containerd[1606]: time="2025-05-17T00:06:48.972810200Z" level=info msg="Start streaming server"
May 17 00:06:48.973154 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:06:48.983749 containerd[1606]: time="2025-05-17T00:06:48.983704880Z" level=info msg="containerd successfully booted in 0.084391s"
May 17 00:06:49.316503 tar[1592]: linux-arm64/LICENSE
May 17 00:06:49.316503 tar[1592]: linux-arm64/README.md
May 17 00:06:49.333957 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:06:49.676452 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:06:49.715134 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:06:49.723670 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:06:49.736348 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:06:49.736639 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:06:49.744440 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:06:49.756643 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:06:49.766678 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:06:49.769471 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 17 00:06:49.770714 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:06:49.813605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:06:49.813907 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:06:49.816716 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 00:06:49.817923 systemd[1]: Startup finished in 9.966s (kernel) + 4.996s (userspace) = 14.963s.
May 17 00:06:50.393621 kubelet[1718]: E0517 00:06:50.393548 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:06:50.397665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:06:50.398186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:00.648403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:07:00.659484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:00.776537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:00.779644 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:00.831321 kubelet[1741]: E0517 00:07:00.831261 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:00.836239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:00.836541 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:10.980769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:07:10.988572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:11.122075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:11.122076 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:11.178788 kubelet[1761]: E0517 00:07:11.178724 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:11.181478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:11.181665 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:21.230525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 00:07:21.237518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:21.365445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:21.369539 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:21.414142 kubelet[1781]: E0517 00:07:21.414073 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:21.418510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:21.418715 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:31.480667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 17 00:07:31.488648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:31.617495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:31.618575 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:31.665276 kubelet[1801]: E0517 00:07:31.665189 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:31.668470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:31.668660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:33.864437 update_engine[1581]: I20250517 00:07:33.864317 1581 update_attempter.cc:509] Updating boot flags...
May 17 00:07:33.924293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1818)
May 17 00:07:33.974617 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1821)
May 17 00:07:34.047270 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1821)
May 17 00:07:41.730067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 17 00:07:41.740535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:41.867472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:41.884964 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:41.931013 kubelet[1842]: E0517 00:07:41.930952 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:41.933794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:41.933989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:07:51.980636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 17 00:07:51.991951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:07:52.123551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:07:52.137896 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:07:52.182727 kubelet[1862]: E0517 00:07:52.182646 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:07:52.185326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:07:52.185526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:08:02.230333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 17 00:08:02.246103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:08:02.378554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:08:02.384158 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:08:02.428202 kubelet[1882]: E0517 00:08:02.428137 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:08:02.431992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:08:02.432401 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:08:12.480537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 17 00:08:12.491552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:08:12.625467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:12.629709 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:12.676558 kubelet[1902]: E0517 00:08:12.676485 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:12.679760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:12.679997 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:22.730815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 17 00:08:22.740572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:22.858464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:22.872945 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:22.912875 kubelet[1922]: E0517 00:08:22.912818 1922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:22.914903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:22.915097 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:29.494036 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 17 00:08:29.500642 systemd[1]: Started sshd@0-49.12.39.64:22-139.178.68.195:52894.service - OpenSSH per-connection server daemon (139.178.68.195:52894). May 17 00:08:30.482811 sshd[1930]: Accepted publickey for core from 139.178.68.195 port 52894 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:30.485580 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:30.497536 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:08:30.505565 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:08:30.510119 systemd-logind[1575]: New session 1 of user core. May 17 00:08:30.523427 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:08:30.530768 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:08:30.545373 (systemd)[1936]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:08:30.653717 systemd[1936]: Queued start job for default target default.target. May 17 00:08:30.654450 systemd[1936]: Created slice app.slice - User Application Slice. May 17 00:08:30.654583 systemd[1936]: Reached target paths.target - Paths. May 17 00:08:30.654703 systemd[1936]: Reached target timers.target - Timers. May 17 00:08:30.659506 systemd[1936]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:08:30.670433 systemd[1936]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:08:30.670501 systemd[1936]: Reached target sockets.target - Sockets. May 17 00:08:30.670515 systemd[1936]: Reached target basic.target - Basic System. May 17 00:08:30.670562 systemd[1936]: Reached target default.target - Main User Target. May 17 00:08:30.670588 systemd[1936]: Startup finished in 117ms. May 17 00:08:30.671288 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 17 00:08:30.678935 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:08:31.372571 systemd[1]: Started sshd@1-49.12.39.64:22-139.178.68.195:52908.service - OpenSSH per-connection server daemon (139.178.68.195:52908). May 17 00:08:32.360757 sshd[1948]: Accepted publickey for core from 139.178.68.195 port 52908 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:32.363273 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:32.369675 systemd-logind[1575]: New session 2 of user core. May 17 00:08:32.375690 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:08:32.980050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 17 00:08:32.985560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:33.055457 sshd[1948]: pam_unix(sshd:session): session closed for user core May 17 00:08:33.065736 systemd[1]: sshd@1-49.12.39.64:22-139.178.68.195:52908.service: Deactivated successfully. May 17 00:08:33.065904 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit. May 17 00:08:33.069509 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:08:33.071953 systemd-logind[1575]: Removed session 2. May 17 00:08:33.136577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:08:33.145078 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:33.187285 kubelet[1968]: E0517 00:08:33.187201 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:33.191669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:33.192003 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:33.232699 systemd[1]: Started sshd@2-49.12.39.64:22-139.178.68.195:52910.service - OpenSSH per-connection server daemon (139.178.68.195:52910). May 17 00:08:34.228167 sshd[1977]: Accepted publickey for core from 139.178.68.195 port 52910 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:34.229899 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:34.236764 systemd-logind[1575]: New session 3 of user core. May 17 00:08:34.243763 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:08:34.913868 sshd[1977]: pam_unix(sshd:session): session closed for user core May 17 00:08:34.918932 systemd[1]: sshd@2-49.12.39.64:22-139.178.68.195:52910.service: Deactivated successfully. May 17 00:08:34.919321 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. May 17 00:08:34.922672 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:08:34.923805 systemd-logind[1575]: Removed session 3. May 17 00:08:35.084910 systemd[1]: Started sshd@3-49.12.39.64:22-139.178.68.195:53272.service - OpenSSH per-connection server daemon (139.178.68.195:53272). 
May 17 00:08:36.075001 sshd[1985]: Accepted publickey for core from 139.178.68.195 port 53272 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:36.077894 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:36.084180 systemd-logind[1575]: New session 4 of user core. May 17 00:08:36.090788 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:08:36.767071 sshd[1985]: pam_unix(sshd:session): session closed for user core May 17 00:08:36.773080 systemd[1]: sshd@3-49.12.39.64:22-139.178.68.195:53272.service: Deactivated successfully. May 17 00:08:36.776561 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. May 17 00:08:36.776686 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:08:36.778707 systemd-logind[1575]: Removed session 4. May 17 00:08:36.930488 systemd[1]: Started sshd@4-49.12.39.64:22-139.178.68.195:53286.service - OpenSSH per-connection server daemon (139.178.68.195:53286). May 17 00:08:37.912839 sshd[1993]: Accepted publickey for core from 139.178.68.195 port 53286 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:37.914398 sshd[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:37.920226 systemd-logind[1575]: New session 5 of user core. May 17 00:08:37.925817 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:08:38.441430 sudo[1997]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:08:38.441724 sudo[1997]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:38.459091 sudo[1997]: pam_unix(sudo:session): session closed for user root May 17 00:08:38.618725 sshd[1993]: pam_unix(sshd:session): session closed for user core May 17 00:08:38.623672 systemd[1]: sshd@4-49.12.39.64:22-139.178.68.195:53286.service: Deactivated successfully. 
May 17 00:08:38.627902 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:08:38.629944 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. May 17 00:08:38.631133 systemd-logind[1575]: Removed session 5. May 17 00:08:38.785698 systemd[1]: Started sshd@5-49.12.39.64:22-139.178.68.195:53298.service - OpenSSH per-connection server daemon (139.178.68.195:53298). May 17 00:08:39.767485 sshd[2002]: Accepted publickey for core from 139.178.68.195 port 53298 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:39.770784 sshd[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:39.777295 systemd-logind[1575]: New session 6 of user core. May 17 00:08:39.793400 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:08:40.292065 sudo[2007]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:08:40.292438 sudo[2007]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:40.296315 sudo[2007]: pam_unix(sudo:session): session closed for user root May 17 00:08:40.301237 sudo[2006]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:08:40.301925 sudo[2006]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:40.324177 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:08:40.326743 auditctl[2010]: No rules May 17 00:08:40.327289 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:08:40.327553 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:08:40.331638 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:08:40.359819 augenrules[2029]: No rules May 17 00:08:40.361654 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
May 17 00:08:40.363147 sudo[2006]: pam_unix(sudo:session): session closed for user root May 17 00:08:40.525699 sshd[2002]: pam_unix(sshd:session): session closed for user core May 17 00:08:40.530858 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. May 17 00:08:40.530926 systemd[1]: sshd@5-49.12.39.64:22-139.178.68.195:53298.service: Deactivated successfully. May 17 00:08:40.535100 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:08:40.536122 systemd-logind[1575]: Removed session 6. May 17 00:08:40.704792 systemd[1]: Started sshd@6-49.12.39.64:22-139.178.68.195:53308.service - OpenSSH per-connection server daemon (139.178.68.195:53308). May 17 00:08:41.700236 sshd[2038]: Accepted publickey for core from 139.178.68.195 port 53308 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:08:41.702386 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:41.708023 systemd-logind[1575]: New session 7 of user core. May 17 00:08:41.714786 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:08:42.231204 sudo[2042]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:08:42.231493 sudo[2042]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:08:42.521036 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:08:42.522225 (dockerd)[2058]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:08:42.769135 dockerd[2058]: time="2025-05-17T00:08:42.769063538Z" level=info msg="Starting up" May 17 00:08:42.843877 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport857351034-merged.mount: Deactivated successfully. May 17 00:08:42.862773 systemd[1]: var-lib-docker-metacopy\x2dcheck33094050-merged.mount: Deactivated successfully. 
May 17 00:08:42.873010 dockerd[2058]: time="2025-05-17T00:08:42.872699984Z" level=info msg="Loading containers: start." May 17 00:08:42.985313 kernel: Initializing XFRM netlink socket May 17 00:08:43.065880 systemd-networkd[1246]: docker0: Link UP May 17 00:08:43.091895 dockerd[2058]: time="2025-05-17T00:08:43.091838635Z" level=info msg="Loading containers: done." May 17 00:08:43.110880 dockerd[2058]: time="2025-05-17T00:08:43.110433103Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:08:43.110880 dockerd[2058]: time="2025-05-17T00:08:43.110566223Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:08:43.110880 dockerd[2058]: time="2025-05-17T00:08:43.110706143Z" level=info msg="Daemon has completed initialization" May 17 00:08:43.149453 dockerd[2058]: time="2025-05-17T00:08:43.149180160Z" level=info msg="API listen on /run/docker.sock" May 17 00:08:43.149929 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:08:43.230194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 17 00:08:43.240006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:08:43.362617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:08:43.363134 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:43.410254 kubelet[2206]: E0517 00:08:43.410188 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:43.414406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:43.414678 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:44.245199 containerd[1606]: time="2025-05-17T00:08:44.245133377Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:08:44.931686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350047823.mount: Deactivated successfully. 
May 17 00:08:46.694310 containerd[1606]: time="2025-05-17T00:08:46.693160367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:46.695323 containerd[1606]: time="2025-05-17T00:08:46.695277572Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25652066" May 17 00:08:46.696503 containerd[1606]: time="2025-05-17T00:08:46.696386695Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:46.699454 containerd[1606]: time="2025-05-17T00:08:46.699394822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:46.701561 containerd[1606]: time="2025-05-17T00:08:46.701217346Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 2.456030089s" May 17 00:08:46.701561 containerd[1606]: time="2025-05-17T00:08:46.701305706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:08:46.703339 containerd[1606]: time="2025-05-17T00:08:46.703281390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:08:48.613494 containerd[1606]: time="2025-05-17T00:08:48.612392014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.614641 containerd[1606]: time="2025-05-17T00:08:48.614614500Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459548" May 17 00:08:48.615920 containerd[1606]: time="2025-05-17T00:08:48.615879383Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.619299 containerd[1606]: time="2025-05-17T00:08:48.619270633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:48.621432 containerd[1606]: time="2025-05-17T00:08:48.621405038Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.918038248s" May 17 00:08:48.621554 containerd[1606]: time="2025-05-17T00:08:48.621523399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:08:48.622185 containerd[1606]: time="2025-05-17T00:08:48.622114760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:08:50.225327 containerd[1606]: time="2025-05-17T00:08:50.225264130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.226625 containerd[1606]: time="2025-05-17T00:08:50.226415614Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125299" May 17 00:08:50.227497 containerd[1606]: time="2025-05-17T00:08:50.227444177Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.231056 containerd[1606]: time="2025-05-17T00:08:50.231015668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:50.234152 containerd[1606]: time="2025-05-17T00:08:50.233190195Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.610870794s" May 17 00:08:50.234152 containerd[1606]: time="2025-05-17T00:08:50.233271276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:08:50.237214 containerd[1606]: time="2025-05-17T00:08:50.237177968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:08:51.246004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724889338.mount: Deactivated successfully. 
May 17 00:08:51.964300 containerd[1606]: time="2025-05-17T00:08:51.964233691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.966118 containerd[1606]: time="2025-05-17T00:08:51.965706096Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871401" May 17 00:08:51.967195 containerd[1606]: time="2025-05-17T00:08:51.967143380Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.969332 containerd[1606]: time="2025-05-17T00:08:51.969274148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:51.970269 containerd[1606]: time="2025-05-17T00:08:51.970112631Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.732894943s" May 17 00:08:51.970269 containerd[1606]: time="2025-05-17T00:08:51.970144991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:08:51.970725 containerd[1606]: time="2025-05-17T00:08:51.970682713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:08:52.550041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116872819.mount: Deactivated successfully. 
May 17 00:08:53.281747 containerd[1606]: time="2025-05-17T00:08:53.281667902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.283236 containerd[1606]: time="2025-05-17T00:08:53.283203468Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" May 17 00:08:53.283898 containerd[1606]: time="2025-05-17T00:08:53.283556549Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.286929 containerd[1606]: time="2025-05-17T00:08:53.286889442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.288350 containerd[1606]: time="2025-05-17T00:08:53.288197327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.317389974s" May 17 00:08:53.288350 containerd[1606]: time="2025-05-17T00:08:53.288235727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:08:53.289537 containerd[1606]: time="2025-05-17T00:08:53.289306611Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:08:53.480120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 17 00:08:53.486580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:08:53.618491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:08:53.623272 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:08:53.663759 kubelet[2349]: E0517 00:08:53.663684 2349 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:08:53.667803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:08:53.668152 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:08:53.810770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117521095.mount: Deactivated successfully. May 17 00:08:53.817308 containerd[1606]: time="2025-05-17T00:08:53.816635744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.818187 containerd[1606]: time="2025-05-17T00:08:53.818042869Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" May 17 00:08:53.819020 containerd[1606]: time="2025-05-17T00:08:53.818943472Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.821305 containerd[1606]: time="2025-05-17T00:08:53.821213801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:53.822409 containerd[1606]: time="2025-05-17T00:08:53.822064084Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.723033ms" May 17 00:08:53.822409 containerd[1606]: time="2025-05-17T00:08:53.822094964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:08:53.822705 containerd[1606]: time="2025-05-17T00:08:53.822683207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:08:54.442493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454656146.mount: Deactivated successfully. May 17 00:08:57.469579 containerd[1606]: time="2025-05-17T00:08:57.469487724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.471263 containerd[1606]: time="2025-05-17T00:08:57.471168252Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" May 17 00:08:57.472180 containerd[1606]: time="2025-05-17T00:08:57.472136856Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.478277 containerd[1606]: time="2025-05-17T00:08:57.476773437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:08:57.478474 containerd[1606]: time="2025-05-17T00:08:57.478441365Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.655144156s" May 17 00:08:57.478567 containerd[1606]: time="2025-05-17T00:08:57.478550885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:09:03.473922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:03.480803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:03.522048 systemd[1]: Reloading requested from client PID 2445 ('systemctl') (unit session-7.scope)... May 17 00:09:03.522066 systemd[1]: Reloading... May 17 00:09:03.658316 zram_generator::config[2485]: No configuration found. May 17 00:09:03.760671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:03.829325 systemd[1]: Reloading finished in 306 ms. May 17 00:09:03.891956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:03.893647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:03.899787 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:09:03.900098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:03.904432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:04.045645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:09:04.047452 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:04.091950 kubelet[2548]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:04.091950 kubelet[2548]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:09:04.091950 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:04.092407 kubelet[2548]: I0517 00:09:04.092002 2548 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:09:04.787988 kubelet[2548]: I0517 00:09:04.787941 2548 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:09:04.787988 kubelet[2548]: I0517 00:09:04.787982 2548 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:09:04.788587 kubelet[2548]: I0517 00:09:04.788457 2548 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:09:04.817699 kubelet[2548]: E0517 00:09:04.817642 2548 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.12.39.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.819383 kubelet[2548]: I0517 
00:09:04.819206 2548 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:09:04.826995 kubelet[2548]: E0517 00:09:04.826930 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:09:04.826995 kubelet[2548]: I0517 00:09:04.826985 2548 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:09:04.831040 kubelet[2548]: I0517 00:09:04.831000 2548 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:09:04.832420 kubelet[2548]: I0517 00:09:04.832378 2548 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:09:04.832620 kubelet[2548]: I0517 00:09:04.832574 2548 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:09:04.832800 kubelet[2548]: I0517 00:09:04.832610 2548 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-3-n-bb4b333066","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:09:04.832900 kubelet[2548]: I0517 00:09:04.832859 2548 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:09:04.832900 kubelet[2548]: I0517 00:09:04.832868 2548 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:09:04.833139 kubelet[2548]: I0517 00:09:04.833109 2548 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:04.837530 kubelet[2548]: I0517 00:09:04.837476 2548 
kubelet.go:408] "Attempting to sync node with API server" May 17 00:09:04.837530 kubelet[2548]: I0517 00:09:04.837532 2548 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:04.837644 kubelet[2548]: I0517 00:09:04.837560 2548 kubelet.go:314] "Adding apiserver pod source" May 17 00:09:04.837644 kubelet[2548]: I0517 00:09:04.837577 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:04.840474 kubelet[2548]: W0517 00:09:04.840332 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.39.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-bb4b333066&limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:04.842078 kubelet[2548]: E0517 00:09:04.840708 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.12.39.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-bb4b333066&limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.842222 kubelet[2548]: W0517 00:09:04.842124 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.39.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:04.842222 kubelet[2548]: E0517 00:09:04.842172 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.12.39.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.842819 kubelet[2548]: I0517 00:09:04.842782 2548 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:04.843594 kubelet[2548]: I0517 00:09:04.843555 2548 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:09:04.843706 kubelet[2548]: W0517 00:09:04.843680 2548 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:09:04.846307 kubelet[2548]: I0517 00:09:04.846266 2548 server.go:1274] "Started kubelet" May 17 00:09:04.855016 kubelet[2548]: I0517 00:09:04.854980 2548 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:04.856096 kubelet[2548]: I0517 00:09:04.856038 2548 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:04.858001 kubelet[2548]: I0517 00:09:04.857982 2548 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:09:04.858406 kubelet[2548]: I0517 00:09:04.858326 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:04.858526 kubelet[2548]: E0517 00:09:04.858507 2548 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-bb4b333066\" not found" May 17 00:09:04.863744 kubelet[2548]: E0517 00:09:04.852428 2548 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.39.64:6443/api/v1/namespaces/default/events\": dial tcp 49.12.39.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-bb4b333066.184027eacdac941f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-bb4b333066,UID:ci-4081-3-3-n-bb4b333066,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-bb4b333066,},FirstTimestamp:2025-05-17 00:09:04.846222367 +0000 UTC m=+0.792486279,LastTimestamp:2025-05-17 00:09:04.846222367 +0000 UTC m=+0.792486279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-bb4b333066,}" May 17 00:09:04.865069 kubelet[2548]: E0517 00:09:04.865027 2548 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.39.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-bb4b333066?timeout=10s\": dial tcp 49.12.39.64:6443: connect: connection refused" interval="200ms" May 17 00:09:04.867868 kubelet[2548]: I0517 00:09:04.867840 2548 server.go:449] "Adding debug handlers to kubelet server" May 17 00:09:04.868022 kubelet[2548]: I0517 00:09:04.867986 2548 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:04.868022 kubelet[2548]: I0517 00:09:04.867923 2548 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:09:04.869268 kubelet[2548]: W0517 00:09:04.868592 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.39.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:04.869268 kubelet[2548]: E0517 00:09:04.868659 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.39.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.869268 kubelet[2548]: I0517 00:09:04.867811 2548 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:04.869698 kubelet[2548]: 
I0517 00:09:04.869672 2548 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:04.870658 kubelet[2548]: I0517 00:09:04.870233 2548 factory.go:221] Registration of the systemd container factory successfully May 17 00:09:04.870869 kubelet[2548]: I0517 00:09:04.870847 2548 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:04.872871 kubelet[2548]: E0517 00:09:04.872850 2548 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:09:04.873205 kubelet[2548]: I0517 00:09:04.873188 2548 factory.go:221] Registration of the containerd container factory successfully May 17 00:09:04.880265 kubelet[2548]: I0517 00:09:04.879591 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:09:04.881299 kubelet[2548]: I0517 00:09:04.880614 2548 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:09:04.881299 kubelet[2548]: I0517 00:09:04.880645 2548 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:09:04.881299 kubelet[2548]: I0517 00:09:04.880668 2548 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:09:04.881299 kubelet[2548]: E0517 00:09:04.880716 2548 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:04.887676 kubelet[2548]: W0517 00:09:04.887619 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.39.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:04.887818 kubelet[2548]: E0517 00:09:04.887679 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.39.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:04.912709 kubelet[2548]: I0517 00:09:04.912682 2548 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:09:04.912849 kubelet[2548]: I0517 00:09:04.912838 2548 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:09:04.912904 kubelet[2548]: I0517 00:09:04.912896 2548 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:04.915191 kubelet[2548]: I0517 00:09:04.915169 2548 policy_none.go:49] "None policy: Start" May 17 00:09:04.916146 kubelet[2548]: I0517 00:09:04.916113 2548 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:09:04.916146 kubelet[2548]: I0517 00:09:04.916145 2548 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:04.923816 kubelet[2548]: I0517 00:09:04.922842 2548 manager.go:513] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:09:04.923816 kubelet[2548]: I0517 00:09:04.923096 2548 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:04.923816 kubelet[2548]: I0517 00:09:04.923111 2548 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:04.924350 kubelet[2548]: I0517 00:09:04.924325 2548 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:04.925656 kubelet[2548]: E0517 00:09:04.925632 2548 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-bb4b333066\" not found" May 17 00:09:05.027235 kubelet[2548]: I0517 00:09:05.027159 2548 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.027944 kubelet[2548]: E0517 00:09:05.027901 2548 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.12.39.64:6443/api/v1/nodes\": dial tcp 49.12.39.64:6443: connect: connection refused" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.066721 kubelet[2548]: E0517 00:09:05.066528 2548 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.39.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-bb4b333066?timeout=10s\": dial tcp 49.12.39.64:6443: connect: connection refused" interval="400ms" May 17 00:09:05.070360 kubelet[2548]: I0517 00:09:05.070302 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070559 kubelet[2548]: I0517 
00:09:05.070382 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf0d16db9c4bb46d3b3d0c4bd1754291-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-bb4b333066\" (UID: \"cf0d16db9c4bb46d3b3d0c4bd1754291\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070559 kubelet[2548]: I0517 00:09:05.070422 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070559 kubelet[2548]: I0517 00:09:05.070456 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070649 kubelet[2548]: I0517 00:09:05.070540 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070649 kubelet[2548]: I0517 00:09:05.070604 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070649 kubelet[2548]: I0517 00:09:05.070640 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070733 kubelet[2548]: I0517 00:09:05.070673 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.070733 kubelet[2548]: I0517 00:09:05.070705 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:05.231399 kubelet[2548]: I0517 00:09:05.231362 2548 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.232229 kubelet[2548]: E0517 00:09:05.231779 2548 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.12.39.64:6443/api/v1/nodes\": dial tcp 49.12.39.64:6443: connect: connection refused" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.291907 containerd[1606]: time="2025-05-17T00:09:05.291624483Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-bb4b333066,Uid:4974a8a00af0fbebcb104d775480b348,Namespace:kube-system,Attempt:0,}" May 17 00:09:05.297768 containerd[1606]: time="2025-05-17T00:09:05.297707119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-bb4b333066,Uid:68a6fc5f160b78f289e1360a7acb8a01,Namespace:kube-system,Attempt:0,}" May 17 00:09:05.298325 containerd[1606]: time="2025-05-17T00:09:05.298142321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-bb4b333066,Uid:cf0d16db9c4bb46d3b3d0c4bd1754291,Namespace:kube-system,Attempt:0,}" May 17 00:09:05.456225 kubelet[2548]: E0517 00:09:05.455714 2548 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.39.64:6443/api/v1/namespaces/default/events\": dial tcp 49.12.39.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-bb4b333066.184027eacdac941f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-bb4b333066,UID:ci-4081-3-3-n-bb4b333066,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-bb4b333066,},FirstTimestamp:2025-05-17 00:09:04.846222367 +0000 UTC m=+0.792486279,LastTimestamp:2025-05-17 00:09:04.846222367 +0000 UTC m=+0.792486279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-bb4b333066,}" May 17 00:09:05.467310 kubelet[2548]: E0517 00:09:05.467214 2548 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.39.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-bb4b333066?timeout=10s\": dial tcp 49.12.39.64:6443: connect: connection refused" interval="800ms" May 17 00:09:05.636478 kubelet[2548]: I0517 
00:09:05.636161 2548 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.637778 kubelet[2548]: E0517 00:09:05.637738 2548 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.12.39.64:6443/api/v1/nodes\": dial tcp 49.12.39.64:6443: connect: connection refused" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:05.818610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190638589.mount: Deactivated successfully. May 17 00:09:05.828222 containerd[1606]: time="2025-05-17T00:09:05.828117069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.830128 containerd[1606]: time="2025-05-17T00:09:05.830073600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.830666 containerd[1606]: time="2025-05-17T00:09:05.830524963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:05.831951 containerd[1606]: time="2025-05-17T00:09:05.831880690Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.833560 containerd[1606]: time="2025-05-17T00:09:05.833400059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" May 17 00:09:05.835115 containerd[1606]: time="2025-05-17T00:09:05.834941228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:09:05.835115 containerd[1606]: time="2025-05-17T00:09:05.835042229Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.838849 containerd[1606]: time="2025-05-17T00:09:05.838799010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:09:05.840372 containerd[1606]: time="2025-05-17T00:09:05.840070498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.849176ms" May 17 00:09:05.841742 containerd[1606]: time="2025-05-17T00:09:05.841702147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.862148ms" May 17 00:09:05.843745 containerd[1606]: time="2025-05-17T00:09:05.843155436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.400512ms" May 17 00:09:05.853103 kubelet[2548]: W0517 00:09:05.852946 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://49.12.39.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-bb4b333066&limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:05.853103 kubelet[2548]: E0517 00:09:05.853034 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.12.39.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-bb4b333066&limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.916628 kubelet[2548]: W0517 00:09:05.916564 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.39.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:05.917354 kubelet[2548]: E0517 00:09:05.917314 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.39.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:05.940436 kubelet[2548]: W0517 00:09:05.940328 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.39.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:05.940436 kubelet[2548]: E0517 00:09:05.940400 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.12.39.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" 
logger="UnhandledError" May 17 00:09:05.958521 containerd[1606]: time="2025-05-17T00:09:05.958373422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.958717 containerd[1606]: time="2025-05-17T00:09:05.958471463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.958717 containerd[1606]: time="2025-05-17T00:09:05.958541783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.958918 containerd[1606]: time="2025-05-17T00:09:05.958725985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.962777 containerd[1606]: time="2025-05-17T00:09:05.962627767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.963667 containerd[1606]: time="2025-05-17T00:09:05.962691327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.963767 containerd[1606]: time="2025-05-17T00:09:05.963639253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.963880 containerd[1606]: time="2025-05-17T00:09:05.963788814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.964977 containerd[1606]: time="2025-05-17T00:09:05.964370457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:05.965472 containerd[1606]: time="2025-05-17T00:09:05.965414903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:05.965837 containerd[1606]: time="2025-05-17T00:09:05.965712585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:05.966134 containerd[1606]: time="2025-05-17T00:09:05.966071467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:06.037604 containerd[1606]: time="2025-05-17T00:09:06.037361004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-bb4b333066,Uid:68a6fc5f160b78f289e1360a7acb8a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c2cb3e1ae1ed000f99894e0fe2d01b78b4ddf11df5f223bb0bc2214bae58133\"" May 17 00:09:06.042771 containerd[1606]: time="2025-05-17T00:09:06.042622756Z" level=info msg="CreateContainer within sandbox \"2c2cb3e1ae1ed000f99894e0fe2d01b78b4ddf11df5f223bb0bc2214bae58133\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:09:06.069264 containerd[1606]: time="2025-05-17T00:09:06.069023232Z" level=info msg="CreateContainer within sandbox \"2c2cb3e1ae1ed000f99894e0fe2d01b78b4ddf11df5f223bb0bc2214bae58133\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473\"" May 17 00:09:06.070920 containerd[1606]: time="2025-05-17T00:09:06.070848123Z" level=info msg="StartContainer for \"cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473\"" May 17 00:09:06.076947 containerd[1606]: time="2025-05-17T00:09:06.076813598Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-bb4b333066,Uid:4974a8a00af0fbebcb104d775480b348,Namespace:kube-system,Attempt:0,} returns sandbox id \"16e02d3b14c2325e4dbeb13e4fae2de2a37f824ee96e2834f99f117dc3894cb0\"" May 17 00:09:06.080222 containerd[1606]: time="2025-05-17T00:09:06.080177778Z" level=info msg="CreateContainer within sandbox \"16e02d3b14c2325e4dbeb13e4fae2de2a37f824ee96e2834f99f117dc3894cb0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:09:06.084622 containerd[1606]: time="2025-05-17T00:09:06.084516564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-bb4b333066,Uid:cf0d16db9c4bb46d3b3d0c4bd1754291,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e5015015e1b5ba79c7e20e5f94014a5f80e822c700060142e10d2c7e8f490fd\"" May 17 00:09:06.089293 containerd[1606]: time="2025-05-17T00:09:06.089122671Z" level=info msg="CreateContainer within sandbox \"2e5015015e1b5ba79c7e20e5f94014a5f80e822c700060142e10d2c7e8f490fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:09:06.091759 kubelet[2548]: W0517 00:09:06.091690 2548 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.39.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.39.64:6443: connect: connection refused May 17 00:09:06.091926 kubelet[2548]: E0517 00:09:06.091769 2548 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.39.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.39.64:6443: connect: connection refused" logger="UnhandledError" May 17 00:09:06.098871 containerd[1606]: time="2025-05-17T00:09:06.098822088Z" level=info msg="CreateContainer within sandbox \"16e02d3b14c2325e4dbeb13e4fae2de2a37f824ee96e2834f99f117dc3894cb0\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"498abf2eb2d4f47e115a303e676627549421a848723f0e7529b1be3fff7d7ccc\"" May 17 00:09:06.099823 containerd[1606]: time="2025-05-17T00:09:06.099786254Z" level=info msg="StartContainer for \"498abf2eb2d4f47e115a303e676627549421a848723f0e7529b1be3fff7d7ccc\"" May 17 00:09:06.159961 containerd[1606]: time="2025-05-17T00:09:06.159826489Z" level=info msg="CreateContainer within sandbox \"2e5015015e1b5ba79c7e20e5f94014a5f80e822c700060142e10d2c7e8f490fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2\"" May 17 00:09:06.164343 containerd[1606]: time="2025-05-17T00:09:06.163598472Z" level=info msg="StartContainer for \"32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2\"" May 17 00:09:06.178495 containerd[1606]: time="2025-05-17T00:09:06.178433720Z" level=info msg="StartContainer for \"cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473\" returns successfully" May 17 00:09:06.195978 containerd[1606]: time="2025-05-17T00:09:06.195073938Z" level=info msg="StartContainer for \"498abf2eb2d4f47e115a303e676627549421a848723f0e7529b1be3fff7d7ccc\" returns successfully" May 17 00:09:06.269465 kubelet[2548]: E0517 00:09:06.269408 2548 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.39.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-bb4b333066?timeout=10s\": dial tcp 49.12.39.64:6443: connect: connection refused" interval="1.6s" May 17 00:09:06.275316 containerd[1606]: time="2025-05-17T00:09:06.274612169Z" level=info msg="StartContainer for \"32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2\" returns successfully" May 17 00:09:06.443340 kubelet[2548]: I0517 00:09:06.441522 2548 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:08.785320 kubelet[2548]: 
E0517 00:09:08.784614 2548 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-bb4b333066\" not found" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:08.869473 kubelet[2548]: I0517 00:09:08.869431 2548 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:08.869634 kubelet[2548]: E0517 00:09:08.869515 2548 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-bb4b333066\": node \"ci-4081-3-3-n-bb4b333066\" not found" May 17 00:09:08.902978 kubelet[2548]: E0517 00:09:08.902920 2548 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-bb4b333066\" not found" May 17 00:09:09.844283 kubelet[2548]: I0517 00:09:09.843368 2548 apiserver.go:52] "Watching apiserver" May 17 00:09:09.869187 kubelet[2548]: I0517 00:09:09.869150 2548 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:09:11.003964 systemd[1]: Reloading requested from client PID 2820 ('systemctl') (unit session-7.scope)... May 17 00:09:11.003985 systemd[1]: Reloading... May 17 00:09:11.132272 zram_generator::config[2860]: No configuration found. May 17 00:09:11.264177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:09:11.345827 systemd[1]: Reloading finished in 341 ms. May 17 00:09:11.381744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:09:11.391369 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:09:11.391932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:11.398948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:09:11.526519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:09:11.531054 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:09:11.578818 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:11.579434 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:09:11.579434 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:09:11.579434 kubelet[2915]: I0517 00:09:11.579265 2915 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:09:11.590056 kubelet[2915]: I0517 00:09:11.589944 2915 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:09:11.590056 kubelet[2915]: I0517 00:09:11.589975 2915 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:09:11.590668 kubelet[2915]: I0517 00:09:11.590650 2915 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:09:11.592666 kubelet[2915]: I0517 00:09:11.592412 2915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:09:11.595383 kubelet[2915]: I0517 00:09:11.595229 2915 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:09:11.602553 kubelet[2915]: E0517 00:09:11.602469 2915 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:09:11.602983 kubelet[2915]: I0517 00:09:11.602708 2915 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:09:11.605155 kubelet[2915]: I0517 00:09:11.605119 2915 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:09:11.605915 kubelet[2915]: I0517 00:09:11.605887 2915 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:09:11.606123 kubelet[2915]: I0517 00:09:11.606070 2915 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:09:11.606846 kubelet[2915]: I0517 00:09:11.606121 2915 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-3-n-bb4b333066","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:09:11.606846 kubelet[2915]: I0517 00:09:11.606572 2915 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:09:11.606846 kubelet[2915]: I0517 00:09:11.606593 2915 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:09:11.606846 kubelet[2915]: I0517 00:09:11.606655 2915 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:11.607035 kubelet[2915]: I0517 00:09:11.606865 2915 
kubelet.go:408] "Attempting to sync node with API server" May 17 00:09:11.607035 kubelet[2915]: I0517 00:09:11.606927 2915 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:09:11.607035 kubelet[2915]: I0517 00:09:11.606959 2915 kubelet.go:314] "Adding apiserver pod source" May 17 00:09:11.607035 kubelet[2915]: I0517 00:09:11.606983 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:09:11.628278 kubelet[2915]: I0517 00:09:11.628189 2915 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:09:11.629097 kubelet[2915]: I0517 00:09:11.629077 2915 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:09:11.629722 kubelet[2915]: I0517 00:09:11.629701 2915 server.go:1274] "Started kubelet" May 17 00:09:11.641449 kubelet[2915]: E0517 00:09:11.641094 2915 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:09:11.643116 kubelet[2915]: I0517 00:09:11.643061 2915 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:09:11.644634 kubelet[2915]: I0517 00:09:11.643702 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:09:11.645794 kubelet[2915]: I0517 00:09:11.645765 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:09:11.648688 kubelet[2915]: I0517 00:09:11.643968 2915 server.go:449] "Adding debug handlers to kubelet server" May 17 00:09:11.649699 kubelet[2915]: I0517 00:09:11.644003 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:09:11.650311 kubelet[2915]: I0517 00:09:11.649896 2915 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:09:11.653273 kubelet[2915]: I0517 00:09:11.650747 2915 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:09:11.653273 kubelet[2915]: I0517 00:09:11.650873 2915 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:09:11.653273 kubelet[2915]: I0517 00:09:11.650998 2915 reconciler.go:26] "Reconciler: start to sync state" May 17 00:09:11.653273 kubelet[2915]: I0517 00:09:11.652258 2915 factory.go:221] Registration of the systemd container factory successfully May 17 00:09:11.653273 kubelet[2915]: I0517 00:09:11.652424 2915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:09:11.654284 kubelet[2915]: I0517 00:09:11.654211 2915 factory.go:221] Registration of the containerd container factory successfully May 17 00:09:11.671679 kubelet[2915]: 
I0517 00:09:11.671505 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:09:11.674794 kubelet[2915]: I0517 00:09:11.674633 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:09:11.674794 kubelet[2915]: I0517 00:09:11.674787 2915 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:09:11.674948 kubelet[2915]: I0517 00:09:11.674814 2915 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:09:11.674948 kubelet[2915]: E0517 00:09:11.674876 2915 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:09:11.720469 kubelet[2915]: I0517 00:09:11.720429 2915 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:09:11.720469 kubelet[2915]: I0517 00:09:11.720456 2915 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:09:11.720469 kubelet[2915]: I0517 00:09:11.720475 2915 state_mem.go:36] "Initialized new in-memory state store" May 17 00:09:11.720779 kubelet[2915]: I0517 00:09:11.720640 2915 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:09:11.720779 kubelet[2915]: I0517 00:09:11.720650 2915 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:09:11.720779 kubelet[2915]: I0517 00:09:11.720669 2915 policy_none.go:49] "None policy: Start" May 17 00:09:11.721662 kubelet[2915]: I0517 00:09:11.721637 2915 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:09:11.721662 kubelet[2915]: I0517 00:09:11.721664 2915 state_mem.go:35] "Initializing new in-memory state store" May 17 00:09:11.721856 kubelet[2915]: I0517 00:09:11.721837 2915 state_mem.go:75] "Updated machine memory state" May 17 00:09:11.723811 kubelet[2915]: I0517 00:09:11.723752 2915 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 
00:09:11.724031 kubelet[2915]: I0517 00:09:11.724014 2915 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:09:11.724137 kubelet[2915]: I0517 00:09:11.724033 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:09:11.728613 kubelet[2915]: I0517 00:09:11.728084 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:09:11.788904 kubelet[2915]: E0517 00:09:11.788769 2915 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.790137 kubelet[2915]: E0517 00:09:11.789978 2915 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-n-bb4b333066\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.832466 kubelet[2915]: I0517 00:09:11.832165 2915 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:11.843723 kubelet[2915]: I0517 00:09:11.843584 2915 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:11.844555 kubelet[2915]: I0517 00:09:11.844338 2915 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-bb4b333066" May 17 00:09:11.855260 kubelet[2915]: I0517 00:09:11.855115 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.855597 kubelet[2915]: I0517 00:09:11.855532 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.855597 kubelet[2915]: I0517 00:09:11.855567 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.855915 kubelet[2915]: I0517 00:09:11.855805 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.856003 kubelet[2915]: I0517 00:09:11.855978 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.856231 kubelet[2915]: I0517 00:09:11.856166 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68a6fc5f160b78f289e1360a7acb8a01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-bb4b333066\" (UID: \"68a6fc5f160b78f289e1360a7acb8a01\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.856231 kubelet[2915]: I0517 00:09:11.856212 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf0d16db9c4bb46d3b3d0c4bd1754291-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-bb4b333066\" (UID: \"cf0d16db9c4bb46d3b3d0c4bd1754291\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.856539 kubelet[2915]: I0517 00:09:11.856514 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:11.857312 kubelet[2915]: I0517 00:09:11.857142 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4974a8a00af0fbebcb104d775480b348-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" (UID: \"4974a8a00af0fbebcb104d775480b348\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:12.003329 sudo[2948]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:09:12.003648 sudo[2948]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:09:12.477518 sudo[2948]: pam_unix(sudo:session): session closed for user root May 17 00:09:12.608048 kubelet[2915]: I0517 00:09:12.608005 2915 apiserver.go:52] "Watching apiserver" May 17 00:09:12.651316 kubelet[2915]: I0517 00:09:12.651253 2915 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:09:12.712271 kubelet[2915]: E0517 00:09:12.710349 2915 kubelet.go:1915] "Failed creating a mirror 
pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-bb4b333066\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" May 17 00:09:12.730160 kubelet[2915]: I0517 00:09:12.730101 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-bb4b333066" podStartSLOduration=2.730054002 podStartE2EDuration="2.730054002s" podCreationTimestamp="2025-05-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.729137676 +0000 UTC m=+1.193835951" watchObservedRunningTime="2025-05-17 00:09:12.730054002 +0000 UTC m=+1.194752277" May 17 00:09:12.758300 kubelet[2915]: I0517 00:09:12.757964 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-bb4b333066" podStartSLOduration=2.7579468670000002 podStartE2EDuration="2.757946867s" podCreationTimestamp="2025-05-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.744127055 +0000 UTC m=+1.208825370" watchObservedRunningTime="2025-05-17 00:09:12.757946867 +0000 UTC m=+1.222645142" May 17 00:09:14.510936 sudo[2042]: pam_unix(sudo:session): session closed for user root May 17 00:09:14.672763 sshd[2038]: pam_unix(sshd:session): session closed for user core May 17 00:09:14.676920 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. May 17 00:09:14.678865 systemd[1]: sshd@6-49.12.39.64:22-139.178.68.195:53308.service: Deactivated successfully. May 17 00:09:14.682776 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:09:14.684273 systemd-logind[1575]: Removed session 7. 
May 17 00:09:15.727731 kubelet[2915]: I0517 00:09:15.727669 2915 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:09:15.729429 containerd[1606]: time="2025-05-17T00:09:15.729389884Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:09:15.730235 kubelet[2915]: I0517 00:09:15.729682 2915 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:09:16.546962 kubelet[2915]: I0517 00:09:16.546869 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-bb4b333066" podStartSLOduration=5.546844533 podStartE2EDuration="5.546844533s" podCreationTimestamp="2025-05-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:12.758991394 +0000 UTC m=+1.223689669" watchObservedRunningTime="2025-05-17 00:09:16.546844533 +0000 UTC m=+5.011542808" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586059 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-cgroup\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586124 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53edaf92-1744-4952-a607-778ec7857b1f-clustermesh-secrets\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586148 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53edaf92-1744-4952-a607-778ec7857b1f-cilium-config-path\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586165 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-hubble-tls\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586189 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cni-path\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.586701 kubelet[2915]: I0517 00:09:16.586227 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-run\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586265 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-etc-cni-netd\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586282 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-lib-modules\") pod \"cilium-rskwq\" (UID: 
\"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586298 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-kernel\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586343 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abd05ea7-6aa8-4e9c-88c6-bcdcbe922455-xtables-lock\") pod \"kube-proxy-9fc5n\" (UID: \"abd05ea7-6aa8-4e9c-88c6-bcdcbe922455\") " pod="kube-system/kube-proxy-9fc5n" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586360 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abd05ea7-6aa8-4e9c-88c6-bcdcbe922455-lib-modules\") pod \"kube-proxy-9fc5n\" (UID: \"abd05ea7-6aa8-4e9c-88c6-bcdcbe922455\") " pod="kube-system/kube-proxy-9fc5n" May 17 00:09:16.587142 kubelet[2915]: I0517 00:09:16.586375 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-hostproc\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587283 kubelet[2915]: I0517 00:09:16.586390 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-bpf-maps\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587283 kubelet[2915]: I0517 
00:09:16.586416 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-net\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587283 kubelet[2915]: I0517 00:09:16.586435 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8c9\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-kube-api-access-jr8c9\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.587283 kubelet[2915]: I0517 00:09:16.586449 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44952\" (UniqueName: \"kubernetes.io/projected/abd05ea7-6aa8-4e9c-88c6-bcdcbe922455-kube-api-access-44952\") pod \"kube-proxy-9fc5n\" (UID: \"abd05ea7-6aa8-4e9c-88c6-bcdcbe922455\") " pod="kube-system/kube-proxy-9fc5n" May 17 00:09:16.587283 kubelet[2915]: I0517 00:09:16.586467 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abd05ea7-6aa8-4e9c-88c6-bcdcbe922455-kube-proxy\") pod \"kube-proxy-9fc5n\" (UID: \"abd05ea7-6aa8-4e9c-88c6-bcdcbe922455\") " pod="kube-system/kube-proxy-9fc5n" May 17 00:09:16.587387 kubelet[2915]: I0517 00:09:16.586536 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-xtables-lock\") pod \"cilium-rskwq\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") " pod="kube-system/cilium-rskwq" May 17 00:09:16.857748 containerd[1606]: time="2025-05-17T00:09:16.857613041Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-9fc5n,Uid:abd05ea7-6aa8-4e9c-88c6-bcdcbe922455,Namespace:kube-system,Attempt:0,}" May 17 00:09:16.866416 containerd[1606]: time="2025-05-17T00:09:16.866303262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rskwq,Uid:53edaf92-1744-4952-a607-778ec7857b1f,Namespace:kube-system,Attempt:0,}" May 17 00:09:16.889010 containerd[1606]: time="2025-05-17T00:09:16.888620299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:09:16.889545 containerd[1606]: time="2025-05-17T00:09:16.889291984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:09:16.889601 kubelet[2915]: I0517 00:09:16.889400 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpqv\" (UniqueName: \"kubernetes.io/projected/f01e8606-a9c3-44a3-8210-a793ea6c6538-kube-api-access-jtpqv\") pod \"cilium-operator-5d85765b45-6hbcf\" (UID: \"f01e8606-a9c3-44a3-8210-a793ea6c6538\") " pod="kube-system/cilium-operator-5d85765b45-6hbcf" May 17 00:09:16.889601 kubelet[2915]: I0517 00:09:16.889446 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f01e8606-a9c3-44a3-8210-a793ea6c6538-cilium-config-path\") pod \"cilium-operator-5d85765b45-6hbcf\" (UID: \"f01e8606-a9c3-44a3-8210-a793ea6c6538\") " pod="kube-system/cilium-operator-5d85765b45-6hbcf" May 17 00:09:16.893274 containerd[1606]: time="2025-05-17T00:09:16.892370206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:09:16.893274 containerd[1606]: time="2025-05-17T00:09:16.892552247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:16.907035 containerd[1606]: time="2025-05-17T00:09:16.906816588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:16.907035 containerd[1606]: time="2025-05-17T00:09:16.906953749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:16.907035 containerd[1606]: time="2025-05-17T00:09:16.906993069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:16.907423 containerd[1606]: time="2025-05-17T00:09:16.907147870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:16.947080 containerd[1606]: time="2025-05-17T00:09:16.946979630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9fc5n,Uid:abd05ea7-6aa8-4e9c-88c6-bcdcbe922455,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cfad9245a875f023cbe0286681d61d15264ac96833b84f900d3fa788fc9488d\""
May 17 00:09:16.953298 containerd[1606]: time="2025-05-17T00:09:16.953231514Z" level=info msg="CreateContainer within sandbox \"8cfad9245a875f023cbe0286681d61d15264ac96833b84f900d3fa788fc9488d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:09:16.965036 containerd[1606]: time="2025-05-17T00:09:16.964866916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rskwq,Uid:53edaf92-1744-4952-a607-778ec7857b1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\""
May 17 00:09:16.968041 containerd[1606]: time="2025-05-17T00:09:16.966724329Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:09:16.973955 containerd[1606]: time="2025-05-17T00:09:16.973699979Z" level=info msg="CreateContainer within sandbox \"8cfad9245a875f023cbe0286681d61d15264ac96833b84f900d3fa788fc9488d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"745e9e2342c68116f18736420ccf5e603849758a5b5c8abd9dc188acf52dd2e3\""
May 17 00:09:16.975278 containerd[1606]: time="2025-05-17T00:09:16.974322743Z" level=info msg="StartContainer for \"745e9e2342c68116f18736420ccf5e603849758a5b5c8abd9dc188acf52dd2e3\""
May 17 00:09:17.043292 containerd[1606]: time="2025-05-17T00:09:17.042066224Z" level=info msg="StartContainer for \"745e9e2342c68116f18736420ccf5e603849758a5b5c8abd9dc188acf52dd2e3\" returns successfully"
May 17 00:09:17.108617 containerd[1606]: time="2025-05-17T00:09:17.107725172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6hbcf,Uid:f01e8606-a9c3-44a3-8210-a793ea6c6538,Namespace:kube-system,Attempt:0,}"
May 17 00:09:17.138493 containerd[1606]: time="2025-05-17T00:09:17.138388031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:17.138603 containerd[1606]: time="2025-05-17T00:09:17.138504192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:17.138653 containerd[1606]: time="2025-05-17T00:09:17.138601552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:17.141287 containerd[1606]: time="2025-05-17T00:09:17.138757874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:17.205732 containerd[1606]: time="2025-05-17T00:09:17.205620671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6hbcf,Uid:f01e8606-a9c3-44a3-8210-a793ea6c6538,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\""
May 17 00:09:17.743967 kubelet[2915]: I0517 00:09:17.743897 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9fc5n" podStartSLOduration=1.743876031 podStartE2EDuration="1.743876031s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:17.743021705 +0000 UTC m=+6.207719980" watchObservedRunningTime="2025-05-17 00:09:17.743876031 +0000 UTC m=+6.208574306"
May 17 00:09:21.818999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539781399.mount: Deactivated successfully.
May 17 00:09:23.272454 containerd[1606]: time="2025-05-17T00:09:23.272364078Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:23.274148 containerd[1606]: time="2025-05-17T00:09:23.274078291Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 17 00:09:23.276648 containerd[1606]: time="2025-05-17T00:09:23.275559822Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:23.277307 containerd[1606]: time="2025-05-17T00:09:23.277270715Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.310460305s"
May 17 00:09:23.277307 containerd[1606]: time="2025-05-17T00:09:23.277305716Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 17 00:09:23.280569 containerd[1606]: time="2025-05-17T00:09:23.280533100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:09:23.281950 containerd[1606]: time="2025-05-17T00:09:23.281766270Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:09:23.305292 containerd[1606]: time="2025-05-17T00:09:23.304861406Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\""
May 17 00:09:23.307282 containerd[1606]: time="2025-05-17T00:09:23.306515099Z" level=info msg="StartContainer for \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\""
May 17 00:09:23.367141 containerd[1606]: time="2025-05-17T00:09:23.367098682Z" level=info msg="StartContainer for \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\" returns successfully"
May 17 00:09:23.490177 containerd[1606]: time="2025-05-17T00:09:23.490116301Z" level=info msg="shim disconnected" id=b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64 namespace=k8s.io
May 17 00:09:23.490177 containerd[1606]: time="2025-05-17T00:09:23.490163782Z" level=warning msg="cleaning up after shim disconnected" id=b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64 namespace=k8s.io
May 17 00:09:23.490177 containerd[1606]: time="2025-05-17T00:09:23.490173502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:09:23.752746 containerd[1606]: time="2025-05-17T00:09:23.752633386Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:09:23.783256 containerd[1606]: time="2025-05-17T00:09:23.783183460Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\""
May 17 00:09:23.785559 containerd[1606]: time="2025-05-17T00:09:23.785524598Z" level=info msg="StartContainer for \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\""
May 17 00:09:23.861914 containerd[1606]: time="2025-05-17T00:09:23.861796220Z" level=info msg="StartContainer for \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\" returns successfully"
May 17 00:09:23.872446 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:09:23.872773 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:09:23.872841 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 17 00:09:23.882648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:09:23.897257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:09:23.904670 containerd[1606]: time="2025-05-17T00:09:23.904610107Z" level=info msg="shim disconnected" id=850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c namespace=k8s.io
May 17 00:09:23.904980 containerd[1606]: time="2025-05-17T00:09:23.904802869Z" level=warning msg="cleaning up after shim disconnected" id=850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c namespace=k8s.io
May 17 00:09:23.904980 containerd[1606]: time="2025-05-17T00:09:23.904816789Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:09:24.295273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64-rootfs.mount: Deactivated successfully.
May 17 00:09:24.757349 containerd[1606]: time="2025-05-17T00:09:24.756915714Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:09:24.779813 containerd[1606]: time="2025-05-17T00:09:24.779679850Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\""
May 17 00:09:24.782705 containerd[1606]: time="2025-05-17T00:09:24.781196941Z" level=info msg="StartContainer for \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\""
May 17 00:09:24.833319 containerd[1606]: time="2025-05-17T00:09:24.833263063Z" level=info msg="StartContainer for \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\" returns successfully"
May 17 00:09:24.871216 containerd[1606]: time="2025-05-17T00:09:24.871137515Z" level=info msg="shim disconnected" id=e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23 namespace=k8s.io
May 17 00:09:24.871583 containerd[1606]: time="2025-05-17T00:09:24.871550678Z" level=warning msg="cleaning up after shim disconnected" id=e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23 namespace=k8s.io
May 17 00:09:24.871681 containerd[1606]: time="2025-05-17T00:09:24.871664039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:09:25.294528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23-rootfs.mount: Deactivated successfully.
May 17 00:09:25.761223 containerd[1606]: time="2025-05-17T00:09:25.761176996Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:09:25.785352 containerd[1606]: time="2025-05-17T00:09:25.785022982Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\""
May 17 00:09:25.787338 containerd[1606]: time="2025-05-17T00:09:25.787282559Z" level=info msg="StartContainer for \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\""
May 17 00:09:25.842763 containerd[1606]: time="2025-05-17T00:09:25.842079346Z" level=info msg="StartContainer for \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\" returns successfully"
May 17 00:09:25.865695 containerd[1606]: time="2025-05-17T00:09:25.865480128Z" level=info msg="shim disconnected" id=845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19 namespace=k8s.io
May 17 00:09:25.865695 containerd[1606]: time="2025-05-17T00:09:25.865544528Z" level=warning msg="cleaning up after shim disconnected" id=845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19 namespace=k8s.io
May 17 00:09:25.865695 containerd[1606]: time="2025-05-17T00:09:25.865552529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:09:26.293917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19-rootfs.mount: Deactivated successfully.
May 17 00:09:26.766939 containerd[1606]: time="2025-05-17T00:09:26.766839880Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:09:26.788328 containerd[1606]: time="2025-05-17T00:09:26.788237088Z" level=info msg="CreateContainer within sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\""
May 17 00:09:26.789259 containerd[1606]: time="2025-05-17T00:09:26.789195696Z" level=info msg="StartContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\""
May 17 00:09:26.849083 containerd[1606]: time="2025-05-17T00:09:26.848974205Z" level=info msg="StartContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" returns successfully"
May 17 00:09:26.936958 kubelet[2915]: I0517 00:09:26.936657 2915 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:09:27.061347 kubelet[2915]: I0517 00:09:27.061094 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf029667-2ba9-4dba-8e8a-7ab9a065a6cc-config-volume\") pod \"coredns-7c65d6cfc9-72nxb\" (UID: \"cf029667-2ba9-4dba-8e8a-7ab9a065a6cc\") " pod="kube-system/coredns-7c65d6cfc9-72nxb"
May 17 00:09:27.061347 kubelet[2915]: I0517 00:09:27.061152 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0894e078-6cca-4d65-84b8-7a7aded63c5e-config-volume\") pod \"coredns-7c65d6cfc9-zd8lp\" (UID: \"0894e078-6cca-4d65-84b8-7a7aded63c5e\") " pod="kube-system/coredns-7c65d6cfc9-zd8lp"
May 17 00:09:27.061347 kubelet[2915]: I0517 00:09:27.061172 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbbl\" (UniqueName: \"kubernetes.io/projected/0894e078-6cca-4d65-84b8-7a7aded63c5e-kube-api-access-dzbbl\") pod \"coredns-7c65d6cfc9-zd8lp\" (UID: \"0894e078-6cca-4d65-84b8-7a7aded63c5e\") " pod="kube-system/coredns-7c65d6cfc9-zd8lp"
May 17 00:09:27.061347 kubelet[2915]: I0517 00:09:27.061194 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5qb\" (UniqueName: \"kubernetes.io/projected/cf029667-2ba9-4dba-8e8a-7ab9a065a6cc-kube-api-access-bl5qb\") pod \"coredns-7c65d6cfc9-72nxb\" (UID: \"cf029667-2ba9-4dba-8e8a-7ab9a065a6cc\") " pod="kube-system/coredns-7c65d6cfc9-72nxb"
May 17 00:09:27.284700 containerd[1606]: time="2025-05-17T00:09:27.284663848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-72nxb,Uid:cf029667-2ba9-4dba-8e8a-7ab9a065a6cc,Namespace:kube-system,Attempt:0,}"
May 17 00:09:27.288448 containerd[1606]: time="2025-05-17T00:09:27.288233196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zd8lp,Uid:0894e078-6cca-4d65-84b8-7a7aded63c5e,Namespace:kube-system,Attempt:0,}"
May 17 00:09:27.793133 kubelet[2915]: I0517 00:09:27.793046 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rskwq" podStartSLOduration=5.4805498759999995 podStartE2EDuration="11.793021877s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="2025-05-17 00:09:16.966277806 +0000 UTC m=+5.430976081" lastFinishedPulling="2025-05-17 00:09:23.278749807 +0000 UTC m=+11.743448082" observedRunningTime="2025-05-17 00:09:27.791133782 +0000 UTC m=+16.255832177" watchObservedRunningTime="2025-05-17 00:09:27.793021877 +0000 UTC m=+16.257720152"
May 17 00:09:32.051680 containerd[1606]: time="2025-05-17T00:09:32.051616001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:32.054089 containerd[1606]: time="2025-05-17T00:09:32.053705018Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 17 00:09:32.055160 containerd[1606]: time="2025-05-17T00:09:32.055088309Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:09:32.056816 containerd[1606]: time="2025-05-17T00:09:32.056649122Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 8.77585214s"
May 17 00:09:32.056816 containerd[1606]: time="2025-05-17T00:09:32.056706043Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 17 00:09:32.060843 containerd[1606]: time="2025-05-17T00:09:32.060807197Z" level=info msg="CreateContainer within sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:09:32.073874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675059329.mount: Deactivated successfully.
May 17 00:09:32.076193 containerd[1606]: time="2025-05-17T00:09:32.076141523Z" level=info msg="CreateContainer within sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\""
May 17 00:09:32.077487 containerd[1606]: time="2025-05-17T00:09:32.076765528Z" level=info msg="StartContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\""
May 17 00:09:32.104840 systemd[1]: run-containerd-runc-k8s.io-686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4-runc.F7dZON.mount: Deactivated successfully.
May 17 00:09:32.136878 containerd[1606]: time="2025-05-17T00:09:32.136829463Z" level=info msg="StartContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" returns successfully"
May 17 00:09:35.984857 systemd-networkd[1246]: cilium_host: Link UP
May 17 00:09:35.985037 systemd-networkd[1246]: cilium_net: Link UP
May 17 00:09:35.985210 systemd-networkd[1246]: cilium_net: Gained carrier
May 17 00:09:35.989959 systemd-networkd[1246]: cilium_host: Gained carrier
May 17 00:09:36.107006 systemd-networkd[1246]: cilium_vxlan: Link UP
May 17 00:09:36.107014 systemd-networkd[1246]: cilium_vxlan: Gained carrier
May 17 00:09:36.390419 systemd-networkd[1246]: cilium_net: Gained IPv6LL
May 17 00:09:36.399408 kernel: NET: Registered PF_ALG protocol family
May 17 00:09:36.430612 systemd-networkd[1246]: cilium_host: Gained IPv6LL
May 17 00:09:37.126335 systemd-networkd[1246]: lxc_health: Link UP
May 17 00:09:37.132108 systemd-networkd[1246]: lxc_health: Gained carrier
May 17 00:09:37.379298 systemd-networkd[1246]: lxcd2575bdde8aa: Link UP
May 17 00:09:37.383276 kernel: eth0: renamed from tmp30992
May 17 00:09:37.394404 systemd-networkd[1246]: lxcd2575bdde8aa: Gained carrier
May 17 00:09:37.414968 systemd-networkd[1246]: lxc98b7ef9b82a0: Link UP
May 17 00:09:37.421409 kernel: eth0: renamed from tmpc074a
May 17 00:09:37.427797 systemd-networkd[1246]: lxc98b7ef9b82a0: Gained carrier
May 17 00:09:37.926403 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL
May 17 00:09:38.822439 systemd-networkd[1246]: lxc_health: Gained IPv6LL
May 17 00:09:38.894528 kubelet[2915]: I0517 00:09:38.894443 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6hbcf" podStartSLOduration=8.044297237 podStartE2EDuration="22.894424844s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="2025-05-17 00:09:17.207884567 +0000 UTC m=+5.672582802" lastFinishedPulling="2025-05-17 00:09:32.058012134 +0000 UTC m=+20.522710409" observedRunningTime="2025-05-17 00:09:32.804613042 +0000 UTC m=+21.269311277" watchObservedRunningTime="2025-05-17 00:09:38.894424844 +0000 UTC m=+27.359123119"
May 17 00:09:39.334624 systemd-networkd[1246]: lxcd2575bdde8aa: Gained IPv6LL
May 17 00:09:39.398450 systemd-networkd[1246]: lxc98b7ef9b82a0: Gained IPv6LL
May 17 00:09:41.567730 containerd[1606]: time="2025-05-17T00:09:41.567621231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:41.569158 containerd[1606]: time="2025-05-17T00:09:41.568790121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:41.569682 containerd[1606]: time="2025-05-17T00:09:41.569296245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:41.569682 containerd[1606]: time="2025-05-17T00:09:41.569627448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:41.571784 containerd[1606]: time="2025-05-17T00:09:41.571550665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:09:41.571784 containerd[1606]: time="2025-05-17T00:09:41.571613385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:09:41.571784 containerd[1606]: time="2025-05-17T00:09:41.571628505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:41.571784 containerd[1606]: time="2025-05-17T00:09:41.571711906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:09:41.648334 containerd[1606]: time="2025-05-17T00:09:41.647735646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zd8lp,Uid:0894e078-6cca-4d65-84b8-7a7aded63c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c074a0ff4720c5c7b3508221939585424ce71dd8da9f011cce3d122529533c30\""
May 17 00:09:41.652947 containerd[1606]: time="2025-05-17T00:09:41.652889211Z" level=info msg="CreateContainer within sandbox \"c074a0ff4720c5c7b3508221939585424ce71dd8da9f011cce3d122529533c30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:09:41.686108 containerd[1606]: time="2025-05-17T00:09:41.685662496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-72nxb,Uid:cf029667-2ba9-4dba-8e8a-7ab9a065a6cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"30992eb41931489f0557e3115df5834c7c9966a2bace9e7ead92b7f9d5c87db5\""
May 17 00:09:41.688033 containerd[1606]: time="2025-05-17T00:09:41.687436551Z" level=info msg="CreateContainer within sandbox \"c074a0ff4720c5c7b3508221939585424ce71dd8da9f011cce3d122529533c30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6267202b6a5d80a3e5ab15df50fafe21f5cff6319115f3799da6643cfe2bc93\""
May 17 00:09:41.689927 containerd[1606]: time="2025-05-17T00:09:41.689413048Z" level=info msg="StartContainer for \"d6267202b6a5d80a3e5ab15df50fafe21f5cff6319115f3799da6643cfe2bc93\""
May 17 00:09:41.701435 containerd[1606]: time="2025-05-17T00:09:41.698887371Z" level=info msg="CreateContainer within sandbox \"30992eb41931489f0557e3115df5834c7c9966a2bace9e7ead92b7f9d5c87db5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:09:41.719453 containerd[1606]: time="2025-05-17T00:09:41.719406429Z" level=info msg="CreateContainer within sandbox \"30992eb41931489f0557e3115df5834c7c9966a2bace9e7ead92b7f9d5c87db5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a94389eac8cc15f8f8f85a3cb57d02e3bde14bd83928b99d82f8adff7e31d7ab\""
May 17 00:09:41.721141 containerd[1606]: time="2025-05-17T00:09:41.721015523Z" level=info msg="StartContainer for \"a94389eac8cc15f8f8f85a3cb57d02e3bde14bd83928b99d82f8adff7e31d7ab\""
May 17 00:09:41.776972 containerd[1606]: time="2025-05-17T00:09:41.776235522Z" level=info msg="StartContainer for \"d6267202b6a5d80a3e5ab15df50fafe21f5cff6319115f3799da6643cfe2bc93\" returns successfully"
May 17 00:09:41.814681 containerd[1606]: time="2025-05-17T00:09:41.814546495Z" level=info msg="StartContainer for \"a94389eac8cc15f8f8f85a3cb57d02e3bde14bd83928b99d82f8adff7e31d7ab\" returns successfully"
May 17 00:09:41.858377 kubelet[2915]: I0517 00:09:41.857002 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zd8lp" podStartSLOduration=25.856984343 podStartE2EDuration="25.856984343s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:41.854640923 +0000 UTC m=+30.319339198" watchObservedRunningTime="2025-05-17 00:09:41.856984343 +0000 UTC m=+30.321682618"
May 17 00:09:42.577501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480267081.mount: Deactivated successfully.
May 17 00:09:42.862710 kubelet[2915]: I0517 00:09:42.861597 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-72nxb" podStartSLOduration=26.861558583 podStartE2EDuration="26.861558583s" podCreationTimestamp="2025-05-17 00:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:09:41.922387471 +0000 UTC m=+30.387085746" watchObservedRunningTime="2025-05-17 00:09:42.861558583 +0000 UTC m=+31.326256858"
May 17 00:09:43.099462 kubelet[2915]: I0517 00:09:43.099402 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.916356 1581 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.916405 1581 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.916667 1581 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917059 1581 omaha_request_params.cc:62] Current group set to lts
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917158 1581 update_attempter.cc:499] Already updated boot flags. Skipping.
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917167 1581 update_attempter.cc:643] Scheduling an action processor start.
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917183 1581 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917208 1581 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917277 1581 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917287 1581 omaha_request_action.cc:272] Request:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]:
May 17 00:10:16.917551 update_engine[1581]: I20250517 00:10:16.917294 1581 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:10:16.920196 update_engine[1581]: I20250517 00:10:16.919023 1581 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:10:16.920196 update_engine[1581]: I20250517 00:10:16.919420 1581 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:10:16.920291 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 00:10:16.923498 update_engine[1581]: E20250517 00:10:16.923436 1581 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:10:16.923962 update_engine[1581]: I20250517 00:10:16.923547 1581 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 00:10:26.855509 update_engine[1581]: I20250517 00:10:26.855310 1581 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:10:26.856162 update_engine[1581]: I20250517 00:10:26.855607 1581 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:10:26.856162 update_engine[1581]: I20250517 00:10:26.855881 1581 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:10:26.856839 update_engine[1581]: E20250517 00:10:26.856781 1581 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:10:26.856915 update_engine[1581]: I20250517 00:10:26.856864 1581 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 17 00:10:36.861783 update_engine[1581]: I20250517 00:10:36.860841 1581 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:10:36.861783 update_engine[1581]: I20250517 00:10:36.861214 1581 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:10:36.861783 update_engine[1581]: I20250517 00:10:36.861562 1581 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:10:36.863143 update_engine[1581]: E20250517 00:10:36.863009 1581 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:10:36.863224 update_engine[1581]: I20250517 00:10:36.863152 1581 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 17 00:10:46.865024 update_engine[1581]: I20250517 00:10:46.864918 1581 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:10:46.865477 update_engine[1581]: I20250517 00:10:46.865331 1581 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:10:46.865857 update_engine[1581]: I20250517 00:10:46.865794 1581 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:10:46.866495 update_engine[1581]: E20250517 00:10:46.866429 1581 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866530 1581 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866549 1581 omaha_request_action.cc:617] Omaha request response:
May 17 00:10:46.866842 update_engine[1581]: E20250517 00:10:46.866695 1581 omaha_request_action.cc:636] Omaha request network transfer failed.
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866747 1581 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866760 1581 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866770 1581 update_attempter.cc:306] Processing Done.
May 17 00:10:46.866842 update_engine[1581]: E20250517 00:10:46.866792 1581 update_attempter.cc:619] Update failed.
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866803 1581 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866813 1581 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 17 00:10:46.866842 update_engine[1581]: I20250517 00:10:46.866824 1581 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 17 00:10:46.867368 update_engine[1581]: I20250517 00:10:46.866935 1581 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:10:46.867368 update_engine[1581]: I20250517 00:10:46.866973 1581 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:10:46.867368 update_engine[1581]: I20250517 00:10:46.866986 1581 omaha_request_action.cc:272] Request:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]:
May 17 00:10:46.867368 update_engine[1581]: I20250517 00:10:46.866998 1581 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:10:46.867368 update_engine[1581]: I20250517 00:10:46.867307 1581 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:10:46.867991 update_engine[1581]: I20250517 00:10:46.867642 1581 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:10:46.868647 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 17 00:10:46.869971 locksmithd[1620]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 17 00:10:46.870101 update_engine[1581]: E20250517 00:10:46.869410 1581 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869498 1581 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869513 1581 omaha_request_action.cc:617] Omaha request response:
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869525 1581 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869537 1581 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869546 1581 update_attempter.cc:306] Processing Done.
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869558 1581 update_attempter.cc:310] Error event sent.
May 17 00:10:46.870101 update_engine[1581]: I20250517 00:10:46.869575 1581 update_check_scheduler.cc:74] Next update check in 49m11s
May 17 00:13:49.856648 systemd[1]: Started sshd@7-49.12.39.64:22-139.178.68.195:36218.service - OpenSSH per-connection server daemon (139.178.68.195:36218).
May 17 00:13:50.841375 sshd[4329]: Accepted publickey for core from 139.178.68.195 port 36218 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:50.843260 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:50.849134 systemd-logind[1575]: New session 8 of user core.
May 17 00:13:50.852685 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 00:13:51.623897 sshd[4329]: pam_unix(sshd:session): session closed for user core
May 17 00:13:51.629201 systemd[1]: sshd@7-49.12.39.64:22-139.178.68.195:36218.service: Deactivated successfully.
May 17 00:13:51.632108 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit.
May 17 00:13:51.632541 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:13:51.634159 systemd-logind[1575]: Removed session 8.
May 17 00:13:56.798674 systemd[1]: Started sshd@8-49.12.39.64:22-139.178.68.195:34506.service - OpenSSH per-connection server daemon (139.178.68.195:34506).
May 17 00:13:57.803941 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 34506 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:57.806741 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:57.813941 systemd-logind[1575]: New session 9 of user core.
May 17 00:13:57.821746 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 00:13:58.575616 sshd[4344]: pam_unix(sshd:session): session closed for user core
May 17 00:13:58.580798 systemd[1]: sshd@8-49.12.39.64:22-139.178.68.195:34506.service: Deactivated successfully.
May 17 00:13:58.583837 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit.
May 17 00:13:58.584701 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:13:58.586873 systemd-logind[1575]: Removed session 9.
May 17 00:14:03.744813 systemd[1]: Started sshd@9-49.12.39.64:22-139.178.68.195:50446.service - OpenSSH per-connection server daemon (139.178.68.195:50446).
May 17 00:14:04.737952 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 50446 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:04.740954 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:04.746621 systemd-logind[1575]: New session 10 of user core.
May 17 00:14:04.754770 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 00:14:05.513651 sshd[4359]: pam_unix(sshd:session): session closed for user core
May 17 00:14:05.517397 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit.
May 17 00:14:05.518053 systemd[1]: sshd@9-49.12.39.64:22-139.178.68.195:50446.service: Deactivated successfully.
May 17 00:14:05.523576 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:14:05.526662 systemd-logind[1575]: Removed session 10.
May 17 00:14:05.693350 systemd[1]: Started sshd@10-49.12.39.64:22-139.178.68.195:50456.service - OpenSSH per-connection server daemon (139.178.68.195:50456).
May 17 00:14:06.693901 sshd[4374]: Accepted publickey for core from 139.178.68.195 port 50456 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:06.696238 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:06.703538 systemd-logind[1575]: New session 11 of user core.
May 17 00:14:06.707955 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 00:14:07.527547 sshd[4374]: pam_unix(sshd:session): session closed for user core
May 17 00:14:07.533017 systemd[1]: sshd@10-49.12.39.64:22-139.178.68.195:50456.service: Deactivated successfully.
May 17 00:14:07.538670 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:14:07.540016 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit.
May 17 00:14:07.540956 systemd-logind[1575]: Removed session 11.
May 17 00:14:07.699735 systemd[1]: Started sshd@11-49.12.39.64:22-139.178.68.195:50458.service - OpenSSH per-connection server daemon (139.178.68.195:50458).
May 17 00:14:08.702274 sshd[4385]: Accepted publickey for core from 139.178.68.195 port 50458 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:08.704671 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:08.710932 systemd-logind[1575]: New session 12 of user core.
May 17 00:14:08.721007 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:14:09.480625 sshd[4385]: pam_unix(sshd:session): session closed for user core
May 17 00:14:09.486532 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit.
May 17 00:14:09.487733 systemd[1]: sshd@11-49.12.39.64:22-139.178.68.195:50458.service: Deactivated successfully.
May 17 00:14:09.493142 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:14:09.495809 systemd-logind[1575]: Removed session 12.
May 17 00:14:14.646832 systemd[1]: Started sshd@12-49.12.39.64:22-139.178.68.195:59070.service - OpenSSH per-connection server daemon (139.178.68.195:59070).
May 17 00:14:15.619550 sshd[4401]: Accepted publickey for core from 139.178.68.195 port 59070 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:15.621436 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:15.627350 systemd-logind[1575]: New session 13 of user core.
May 17 00:14:15.635375 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:14:16.368898 sshd[4401]: pam_unix(sshd:session): session closed for user core
May 17 00:14:16.379306 systemd[1]: sshd@12-49.12.39.64:22-139.178.68.195:59070.service: Deactivated successfully.
May 17 00:14:16.383472 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:14:16.383699 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit.
May 17 00:14:16.386059 systemd-logind[1575]: Removed session 13.
May 17 00:14:16.540740 systemd[1]: Started sshd@13-49.12.39.64:22-139.178.68.195:59076.service - OpenSSH per-connection server daemon (139.178.68.195:59076).
May 17 00:14:17.512801 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 59076 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:17.515308 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:17.520326 systemd-logind[1575]: New session 14 of user core.
May 17 00:14:17.529712 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:14:18.309455 sshd[4414]: pam_unix(sshd:session): session closed for user core
May 17 00:14:18.315631 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit.
May 17 00:14:18.315666 systemd[1]: sshd@13-49.12.39.64:22-139.178.68.195:59076.service: Deactivated successfully.
May 17 00:14:18.321297 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:14:18.324457 systemd-logind[1575]: Removed session 14.
May 17 00:14:18.491179 systemd[1]: Started sshd@14-49.12.39.64:22-139.178.68.195:59080.service - OpenSSH per-connection server daemon (139.178.68.195:59080).
May 17 00:14:19.496120 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 59080 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:19.498642 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:19.505513 systemd-logind[1575]: New session 15 of user core.
May 17 00:14:19.512592 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:14:21.883541 sshd[4428]: pam_unix(sshd:session): session closed for user core
May 17 00:14:21.889629 systemd[1]: sshd@14-49.12.39.64:22-139.178.68.195:59080.service: Deactivated successfully.
May 17 00:14:21.894438 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:14:21.897016 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit.
May 17 00:14:21.900849 systemd-logind[1575]: Removed session 15.
May 17 00:14:22.051778 systemd[1]: Started sshd@15-49.12.39.64:22-139.178.68.195:59094.service - OpenSSH per-connection server daemon (139.178.68.195:59094).
May 17 00:14:23.042390 sshd[4447]: Accepted publickey for core from 139.178.68.195 port 59094 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:23.044304 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:23.049529 systemd-logind[1575]: New session 16 of user core.
May 17 00:14:23.059051 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:14:23.930695 sshd[4447]: pam_unix(sshd:session): session closed for user core
May 17 00:14:23.935758 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit.
May 17 00:14:23.936852 systemd[1]: sshd@15-49.12.39.64:22-139.178.68.195:59094.service: Deactivated successfully.
May 17 00:14:23.940139 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:14:23.941617 systemd-logind[1575]: Removed session 16.
May 17 00:14:24.095550 systemd[1]: Started sshd@16-49.12.39.64:22-139.178.68.195:48382.service - OpenSSH per-connection server daemon (139.178.68.195:48382).
May 17 00:14:25.083093 sshd[4458]: Accepted publickey for core from 139.178.68.195 port 48382 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:25.085441 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:25.090861 systemd-logind[1575]: New session 17 of user core.
May 17 00:14:25.096813 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:14:25.843566 sshd[4458]: pam_unix(sshd:session): session closed for user core
May 17 00:14:25.848988 systemd[1]: sshd@16-49.12.39.64:22-139.178.68.195:48382.service: Deactivated successfully.
May 17 00:14:25.849684 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit.
May 17 00:14:25.855440 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:14:25.856994 systemd-logind[1575]: Removed session 17.
May 17 00:14:31.026740 systemd[1]: Started sshd@17-49.12.39.64:22-139.178.68.195:48398.service - OpenSSH per-connection server daemon (139.178.68.195:48398).
May 17 00:14:32.052803 sshd[4475]: Accepted publickey for core from 139.178.68.195 port 48398 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:32.055015 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:32.063186 systemd-logind[1575]: New session 18 of user core.
May 17 00:14:32.068703 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:14:32.818861 sshd[4475]: pam_unix(sshd:session): session closed for user core
May 17 00:14:32.824692 systemd[1]: sshd@17-49.12.39.64:22-139.178.68.195:48398.service: Deactivated successfully.
May 17 00:14:32.832199 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:14:32.834387 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit.
May 17 00:14:32.836545 systemd-logind[1575]: Removed session 18.
May 17 00:14:37.990574 systemd[1]: Started sshd@18-49.12.39.64:22-139.178.68.195:33128.service - OpenSSH per-connection server daemon (139.178.68.195:33128).
May 17 00:14:38.993779 sshd[4491]: Accepted publickey for core from 139.178.68.195 port 33128 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:38.996058 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:39.002735 systemd-logind[1575]: New session 19 of user core.
May 17 00:14:39.011614 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:14:39.758050 sshd[4491]: pam_unix(sshd:session): session closed for user core
May 17 00:14:39.764032 systemd[1]: sshd@18-49.12.39.64:22-139.178.68.195:33128.service: Deactivated successfully.
May 17 00:14:39.768547 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:14:39.769773 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit.
May 17 00:14:39.770887 systemd-logind[1575]: Removed session 19.
May 17 00:14:39.926752 systemd[1]: Started sshd@19-49.12.39.64:22-139.178.68.195:33142.service - OpenSSH per-connection server daemon (139.178.68.195:33142).
May 17 00:14:40.916534 sshd[4505]: Accepted publickey for core from 139.178.68.195 port 33142 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:40.918746 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:40.924174 systemd-logind[1575]: New session 20 of user core.
May 17 00:14:40.932238 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:14:43.100849 containerd[1606]: time="2025-05-17T00:14:43.100684200Z" level=info msg="StopContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" with timeout 30 (s)"
May 17 00:14:43.103115 containerd[1606]: time="2025-05-17T00:14:43.102675178Z" level=info msg="Stop container \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" with signal terminated"
May 17 00:14:43.129365 containerd[1606]: time="2025-05-17T00:14:43.129205985Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:14:43.133580 containerd[1606]: time="2025-05-17T00:14:43.133543706Z" level=info msg="StopContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" with timeout 2 (s)"
May 17 00:14:43.134204 containerd[1606]: time="2025-05-17T00:14:43.134176432Z" level=info msg="Stop container \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" with signal terminated"
May 17 00:14:43.142389 systemd-networkd[1246]: lxc_health: Link DOWN
May 17 00:14:43.143120 systemd-networkd[1246]: lxc_health: Lost carrier
May 17 00:14:43.171921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4-rootfs.mount: Deactivated successfully.
May 17 00:14:43.188092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009-rootfs.mount: Deactivated successfully.
May 17 00:14:43.193275 containerd[1606]: time="2025-05-17T00:14:43.193116460Z" level=info msg="shim disconnected" id=90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009 namespace=k8s.io
May 17 00:14:43.193275 containerd[1606]: time="2025-05-17T00:14:43.193167901Z" level=warning msg="cleaning up after shim disconnected" id=90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009 namespace=k8s.io
May 17 00:14:43.193275 containerd[1606]: time="2025-05-17T00:14:43.193175661Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:43.193597 containerd[1606]: time="2025-05-17T00:14:43.193411463Z" level=info msg="shim disconnected" id=686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4 namespace=k8s.io
May 17 00:14:43.193597 containerd[1606]: time="2025-05-17T00:14:43.193553064Z" level=warning msg="cleaning up after shim disconnected" id=686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4 namespace=k8s.io
May 17 00:14:43.193597 containerd[1606]: time="2025-05-17T00:14:43.193564025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:43.211363 containerd[1606]: time="2025-05-17T00:14:43.211317430Z" level=info msg="StopContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" returns successfully"
May 17 00:14:43.211959 containerd[1606]: time="2025-05-17T00:14:43.211740914Z" level=info msg="StopContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" returns successfully"
May 17 00:14:43.212388 containerd[1606]: time="2025-05-17T00:14:43.212234358Z" level=info msg="StopPodSandbox for \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\""
May 17 00:14:43.212388 containerd[1606]: time="2025-05-17T00:14:43.212350839Z" level=info msg="Container to stop \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.212388 containerd[1606]: time="2025-05-17T00:14:43.212364520Z" level=info msg="Container to stop \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.212694 containerd[1606]: time="2025-05-17T00:14:43.212508201Z" level=info msg="Container to stop \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.212694 containerd[1606]: time="2025-05-17T00:14:43.212530681Z" level=info msg="Container to stop \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.212694 containerd[1606]: time="2025-05-17T00:14:43.212541601Z" level=info msg="Container to stop \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.214977 containerd[1606]: time="2025-05-17T00:14:43.214916183Z" level=info msg="StopPodSandbox for \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\""
May 17 00:14:43.217344 containerd[1606]: time="2025-05-17T00:14:43.215212986Z" level=info msg="Container to stop \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:14:43.216225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7-shm.mount: Deactivated successfully.
May 17 00:14:43.219797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080-shm.mount: Deactivated successfully.
May 17 00:14:43.277372 containerd[1606]: time="2025-05-17T00:14:43.277301604Z" level=info msg="shim disconnected" id=142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7 namespace=k8s.io
May 17 00:14:43.277372 containerd[1606]: time="2025-05-17T00:14:43.277364805Z" level=warning msg="cleaning up after shim disconnected" id=142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7 namespace=k8s.io
May 17 00:14:43.277372 containerd[1606]: time="2025-05-17T00:14:43.277372645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:43.278714 containerd[1606]: time="2025-05-17T00:14:43.277924610Z" level=info msg="shim disconnected" id=a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080 namespace=k8s.io
May 17 00:14:43.278714 containerd[1606]: time="2025-05-17T00:14:43.278719817Z" level=warning msg="cleaning up after shim disconnected" id=a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080 namespace=k8s.io
May 17 00:14:43.278846 containerd[1606]: time="2025-05-17T00:14:43.278732018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:43.294887 containerd[1606]: time="2025-05-17T00:14:43.294844648Z" level=info msg="TearDown network for sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" successfully"
May 17 00:14:43.294887 containerd[1606]: time="2025-05-17T00:14:43.294882888Z" level=info msg="StopPodSandbox for \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" returns successfully"
May 17 00:14:43.298162 containerd[1606]: time="2025-05-17T00:14:43.298112398Z" level=info msg="TearDown network for sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" successfully"
May 17 00:14:43.298526 containerd[1606]: time="2025-05-17T00:14:43.298144838Z" level=info msg="StopPodSandbox for \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" returns successfully"
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353040 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-cgroup\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353126 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtpqv\" (UniqueName: \"kubernetes.io/projected/f01e8606-a9c3-44a3-8210-a793ea6c6538-kube-api-access-jtpqv\") pod \"f01e8606-a9c3-44a3-8210-a793ea6c6538\" (UID: \"f01e8606-a9c3-44a3-8210-a793ea6c6538\") "
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353166 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-etc-cni-netd\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353178 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353204 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53edaf92-1744-4952-a607-778ec7857b1f-clustermesh-secrets\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.353917 kubelet[2915]: I0517 00:14:43.353347 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-bpf-maps\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353398 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr8c9\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-kube-api-access-jr8c9\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353446 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-xtables-lock\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353484 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cni-path\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353519 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-run\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353558 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-kernel\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.354815 kubelet[2915]: I0517 00:14:43.353618 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f01e8606-a9c3-44a3-8210-a793ea6c6538-cilium-config-path\") pod \"f01e8606-a9c3-44a3-8210-a793ea6c6538\" (UID: \"f01e8606-a9c3-44a3-8210-a793ea6c6538\") "
May 17 00:14:43.355151 kubelet[2915]: I0517 00:14:43.353687 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53edaf92-1744-4952-a607-778ec7857b1f-cilium-config-path\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.355151 kubelet[2915]: I0517 00:14:43.353768 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-hubble-tls\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.355151 kubelet[2915]: I0517 00:14:43.353805 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-lib-modules\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.355151 kubelet[2915]: I0517 00:14:43.353838 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-hostproc\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.355151 kubelet[2915]: I0517 00:14:43.353872 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-net\") pod \"53edaf92-1744-4952-a607-778ec7857b1f\" (UID: \"53edaf92-1744-4952-a607-778ec7857b1f\") "
May 17 00:14:43.356435 kubelet[2915]: I0517 00:14:43.356402 2915 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-cgroup\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.356595 kubelet[2915]: I0517 00:14:43.356555 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360087 kubelet[2915]: I0517 00:14:43.360036 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360087 kubelet[2915]: I0517 00:14:43.360093 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360480 kubelet[2915]: I0517 00:14:43.360456 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360579 kubelet[2915]: I0517 00:14:43.360566 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cni-path" (OuterVolumeSpecName: "cni-path") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360648 kubelet[2915]: I0517 00:14:43.360635 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.360717 kubelet[2915]: I0517 00:14:43.360702 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.362692 kubelet[2915]: I0517 00:14:43.362662 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.363333 kubelet[2915]: I0517 00:14:43.362848 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-hostproc" (OuterVolumeSpecName: "hostproc") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:14:43.363622 kubelet[2915]: I0517 00:14:43.363595 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f01e8606-a9c3-44a3-8210-a793ea6c6538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f01e8606-a9c3-44a3-8210-a793ea6c6538" (UID: "f01e8606-a9c3-44a3-8210-a793ea6c6538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:14:43.363815 kubelet[2915]: I0517 00:14:43.363794 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-kube-api-access-jr8c9" (OuterVolumeSpecName: "kube-api-access-jr8c9") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "kube-api-access-jr8c9". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:14:43.363907 kubelet[2915]: I0517 00:14:43.363865 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01e8606-a9c3-44a3-8210-a793ea6c6538-kube-api-access-jtpqv" (OuterVolumeSpecName: "kube-api-access-jtpqv") pod "f01e8606-a9c3-44a3-8210-a793ea6c6538" (UID: "f01e8606-a9c3-44a3-8210-a793ea6c6538"). InnerVolumeSpecName "kube-api-access-jtpqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:14:43.364044 kubelet[2915]: I0517 00:14:43.363913 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53edaf92-1744-4952-a607-778ec7857b1f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:14:43.365204 kubelet[2915]: I0517 00:14:43.365165 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:14:43.365671 kubelet[2915]: I0517 00:14:43.365644 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53edaf92-1744-4952-a607-778ec7857b1f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53edaf92-1744-4952-a607-778ec7857b1f" (UID: "53edaf92-1744-4952-a607-778ec7857b1f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457703 2915 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-bpf-maps\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457769 2915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr8c9\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-kube-api-access-jr8c9\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457797 2915 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-xtables-lock\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457823 2915 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cni-path\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457844 2915 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-cilium-run\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457866 2915 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457889 2915 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f01e8606-a9c3-44a3-8210-a793ea6c6538-cilium-config-path\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458058 kubelet[2915]: I0517 00:14:43.457911 2915 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53edaf92-1744-4952-a607-778ec7857b1f-cilium-config-path\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.458719 kubelet[2915]: I0517 00:14:43.457931 2915 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53edaf92-1744-4952-a607-778ec7857b1f-hubble-tls\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.459008 kubelet[2915]: I0517 00:14:43.457997 2915 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-lib-modules\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.459008 kubelet[2915]: I0517 00:14:43.458848 2915 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-hostproc\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.459008 kubelet[2915]: I0517 00:14:43.458874 2915 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-host-proc-sys-net\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\""
May 17 00:14:43.459008 kubelet[2915]: I0517
00:14:43.458898 2915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtpqv\" (UniqueName: \"kubernetes.io/projected/f01e8606-a9c3-44a3-8210-a793ea6c6538-kube-api-access-jtpqv\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\"" May 17 00:14:43.459008 kubelet[2915]: I0517 00:14:43.458920 2915 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53edaf92-1744-4952-a607-778ec7857b1f-etc-cni-netd\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\"" May 17 00:14:43.459008 kubelet[2915]: I0517 00:14:43.458962 2915 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53edaf92-1744-4952-a607-778ec7857b1f-clustermesh-secrets\") on node \"ci-4081-3-3-n-bb4b333066\" DevicePath \"\"" May 17 00:14:43.592340 kubelet[2915]: I0517 00:14:43.592308 2915 scope.go:117] "RemoveContainer" containerID="686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4" May 17 00:14:43.598073 containerd[1606]: time="2025-05-17T00:14:43.597560746Z" level=info msg="RemoveContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\"" May 17 00:14:43.605552 containerd[1606]: time="2025-05-17T00:14:43.605198337Z" level=info msg="RemoveContainer for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" returns successfully" May 17 00:14:43.606431 kubelet[2915]: I0517 00:14:43.606338 2915 scope.go:117] "RemoveContainer" containerID="686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4" May 17 00:14:43.606862 containerd[1606]: time="2025-05-17T00:14:43.606720551Z" level=error msg="ContainerStatus for \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\": not found" May 17 00:14:43.607158 kubelet[2915]: E0517 00:14:43.606851 2915 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\": not found" containerID="686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4" May 17 00:14:43.607158 kubelet[2915]: I0517 00:14:43.607001 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4"} err="failed to get container status \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\": rpc error: code = NotFound desc = an error occurred when try to find container \"686906f941189f687cb0daf5fa8085f89155293de45c5051bd4fb4edb478ede4\": not found" May 17 00:14:43.607158 kubelet[2915]: I0517 00:14:43.607089 2915 scope.go:117] "RemoveContainer" containerID="90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009" May 17 00:14:43.610104 containerd[1606]: time="2025-05-17T00:14:43.610066862Z" level=info msg="RemoveContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\"" May 17 00:14:43.615406 containerd[1606]: time="2025-05-17T00:14:43.615278111Z" level=info msg="RemoveContainer for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" returns successfully" May 17 00:14:43.616894 kubelet[2915]: I0517 00:14:43.616772 2915 scope.go:117] "RemoveContainer" containerID="845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19" May 17 00:14:43.618287 containerd[1606]: time="2025-05-17T00:14:43.618255859Z" level=info msg="RemoveContainer for \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\"" May 17 00:14:43.622997 containerd[1606]: time="2025-05-17T00:14:43.622915662Z" level=info msg="RemoveContainer for \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\" returns successfully" May 17 00:14:43.623450 kubelet[2915]: I0517 00:14:43.623354 2915 
scope.go:117] "RemoveContainer" containerID="e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23" May 17 00:14:43.628652 containerd[1606]: time="2025-05-17T00:14:43.628577875Z" level=info msg="RemoveContainer for \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\"" May 17 00:14:43.633203 containerd[1606]: time="2025-05-17T00:14:43.633160318Z" level=info msg="RemoveContainer for \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\" returns successfully" May 17 00:14:43.633583 kubelet[2915]: I0517 00:14:43.633376 2915 scope.go:117] "RemoveContainer" containerID="850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c" May 17 00:14:43.634564 containerd[1606]: time="2025-05-17T00:14:43.634539090Z" level=info msg="RemoveContainer for \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\"" May 17 00:14:43.637706 containerd[1606]: time="2025-05-17T00:14:43.637675200Z" level=info msg="RemoveContainer for \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\" returns successfully" May 17 00:14:43.638007 kubelet[2915]: I0517 00:14:43.637859 2915 scope.go:117] "RemoveContainer" containerID="b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64" May 17 00:14:43.638975 containerd[1606]: time="2025-05-17T00:14:43.638951331Z" level=info msg="RemoveContainer for \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\"" May 17 00:14:43.641887 containerd[1606]: time="2025-05-17T00:14:43.641859439Z" level=info msg="RemoveContainer for \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\" returns successfully" May 17 00:14:43.642204 kubelet[2915]: I0517 00:14:43.642125 2915 scope.go:117] "RemoveContainer" containerID="90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009" May 17 00:14:43.642588 containerd[1606]: time="2025-05-17T00:14:43.642434964Z" level=error msg="ContainerStatus for \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\": not found" May 17 00:14:43.642788 kubelet[2915]: E0517 00:14:43.642569 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\": not found" containerID="90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009" May 17 00:14:43.642788 kubelet[2915]: I0517 00:14:43.642694 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009"} err="failed to get container status \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\": rpc error: code = NotFound desc = an error occurred when try to find container \"90984a78e17ca22ad3a702cc97e338673e12620d9fe11135d895e39f837e1009\": not found" May 17 00:14:43.642788 kubelet[2915]: I0517 00:14:43.642714 2915 scope.go:117] "RemoveContainer" containerID="845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19" May 17 00:14:43.643008 containerd[1606]: time="2025-05-17T00:14:43.642874928Z" level=error msg="ContainerStatus for \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\": not found" May 17 00:14:43.643096 kubelet[2915]: E0517 00:14:43.643038 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\": not found" containerID="845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19" May 17 00:14:43.643096 kubelet[2915]: I0517 
00:14:43.643060 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19"} err="failed to get container status \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\": rpc error: code = NotFound desc = an error occurred when try to find container \"845c6dccb1f929ae6f51c89e6aa4f3ff08e0579073ccc6a1933bfc7af5c83b19\": not found" May 17 00:14:43.643096 kubelet[2915]: I0517 00:14:43.643075 2915 scope.go:117] "RemoveContainer" containerID="e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23" May 17 00:14:43.643354 containerd[1606]: time="2025-05-17T00:14:43.643263812Z" level=error msg="ContainerStatus for \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\": not found" May 17 00:14:43.643589 kubelet[2915]: E0517 00:14:43.643512 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\": not found" containerID="e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23" May 17 00:14:43.643589 kubelet[2915]: I0517 00:14:43.643532 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23"} err="failed to get container status \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6ca7bb4f454ab97c144a8896a2084a988f29f3cb32b22106797a7f8a42c9d23\": not found" May 17 00:14:43.643589 kubelet[2915]: I0517 00:14:43.643547 2915 scope.go:117] "RemoveContainer" 
containerID="850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c" May 17 00:14:43.643911 containerd[1606]: time="2025-05-17T00:14:43.643858817Z" level=error msg="ContainerStatus for \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\": not found" May 17 00:14:43.644236 kubelet[2915]: E0517 00:14:43.644073 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\": not found" containerID="850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c" May 17 00:14:43.644236 kubelet[2915]: I0517 00:14:43.644101 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c"} err="failed to get container status \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\": rpc error: code = NotFound desc = an error occurred when try to find container \"850478acb50a76a639d07f08e1897d4f4e75d1114a0f731f04da3597b050696c\": not found" May 17 00:14:43.644236 kubelet[2915]: I0517 00:14:43.644117 2915 scope.go:117] "RemoveContainer" containerID="b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64" May 17 00:14:43.644601 containerd[1606]: time="2025-05-17T00:14:43.644519023Z" level=error msg="ContainerStatus for \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\": not found" May 17 00:14:43.644685 kubelet[2915]: E0517 00:14:43.644632 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\": not found" containerID="b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64" May 17 00:14:43.644685 kubelet[2915]: I0517 00:14:43.644659 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64"} err="failed to get container status \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6370a0b05839387907df0387f29c471a69027189eae7f987b0117cfe469ca64\": not found" May 17 00:14:43.678673 kubelet[2915]: I0517 00:14:43.678626 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53edaf92-1744-4952-a607-778ec7857b1f" path="/var/lib/kubelet/pods/53edaf92-1744-4952-a607-778ec7857b1f/volumes" May 17 00:14:43.679202 kubelet[2915]: I0517 00:14:43.679184 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01e8606-a9c3-44a3-8210-a793ea6c6538" path="/var/lib/kubelet/pods/f01e8606-a9c3-44a3-8210-a793ea6c6538/volumes" May 17 00:14:44.110923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080-rootfs.mount: Deactivated successfully. May 17 00:14:44.111121 systemd[1]: var-lib-kubelet-pods-f01e8606\x2da9c3\x2d44a3\x2d8210\x2da793ea6c6538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djtpqv.mount: Deactivated successfully. May 17 00:14:44.111235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7-rootfs.mount: Deactivated successfully. May 17 00:14:44.111381 systemd[1]: var-lib-kubelet-pods-53edaf92\x2d1744\x2d4952\x2da607\x2d778ec7857b1f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djr8c9.mount: Deactivated successfully. 
May 17 00:14:44.111468 systemd[1]: var-lib-kubelet-pods-53edaf92\x2d1744\x2d4952\x2da607\x2d778ec7857b1f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:14:44.111561 systemd[1]: var-lib-kubelet-pods-53edaf92\x2d1744\x2d4952\x2da607\x2d778ec7857b1f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:14:45.195524 sshd[4505]: pam_unix(sshd:session): session closed for user core May 17 00:14:45.203861 systemd[1]: sshd@19-49.12.39.64:22-139.178.68.195:33142.service: Deactivated successfully. May 17 00:14:45.205491 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. May 17 00:14:45.208048 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:14:45.209849 systemd-logind[1575]: Removed session 20. May 17 00:14:45.357526 systemd[1]: Started sshd@20-49.12.39.64:22-139.178.68.195:41116.service - OpenSSH per-connection server daemon (139.178.68.195:41116). May 17 00:14:46.333435 sshd[4676]: Accepted publickey for core from 139.178.68.195 port 41116 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:46.335365 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:46.342278 systemd-logind[1575]: New session 21 of user core. May 17 00:14:46.351064 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 17 00:14:46.827996 kubelet[2915]: E0517 00:14:46.827822 2915 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:14:47.693403 kubelet[2915]: I0517 00:14:47.693333 2915 setters.go:600] "Node became not ready" node="ci-4081-3-3-n-bb4b333066" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:14:47Z","lastTransitionTime":"2025-05-17T00:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914307 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="mount-cgroup" May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914365 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="apply-sysctl-overwrites" May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914382 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="mount-bpf-fs" May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914396 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="clean-cilium-state" May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914409 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f01e8606-a9c3-44a3-8210-a793ea6c6538" containerName="cilium-operator" May 17 00:14:47.917276 kubelet[2915]: E0517 00:14:47.914423 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="cilium-agent" May 17 00:14:47.917276 kubelet[2915]: I0517 00:14:47.914511 2915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="53edaf92-1744-4952-a607-778ec7857b1f" containerName="cilium-agent" May 17 00:14:47.917276 kubelet[2915]: I0517 00:14:47.914533 2915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01e8606-a9c3-44a3-8210-a793ea6c6538" containerName="cilium-operator" May 17 00:14:47.987405 kubelet[2915]: I0517 00:14:47.987191 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-cilium-cgroup\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.987405 kubelet[2915]: I0517 00:14:47.987233 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4d57c09-c259-4fbd-b232-f553a99b7b98-clustermesh-secrets\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.987405 kubelet[2915]: I0517 00:14:47.987278 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-host-proc-sys-net\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.987405 kubelet[2915]: I0517 00:14:47.987295 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-host-proc-sys-kernel\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.987980 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-cni-path\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.988013 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-lib-modules\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.988031 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-xtables-lock\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.988055 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-bpf-maps\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.988073 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4d57c09-c259-4fbd-b232-f553a99b7b98-cilium-ipsec-secrets\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988485 kubelet[2915]: I0517 00:14:47.988092 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4d57c09-c259-4fbd-b232-f553a99b7b98-hubble-tls\") pod \"cilium-9grff\" 
(UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988724 kubelet[2915]: I0517 00:14:47.988108 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-cilium-run\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988724 kubelet[2915]: I0517 00:14:47.988149 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-etc-cni-netd\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988724 kubelet[2915]: I0517 00:14:47.988172 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8c7\" (UniqueName: \"kubernetes.io/projected/b4d57c09-c259-4fbd-b232-f553a99b7b98-kube-api-access-kr8c7\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988724 kubelet[2915]: I0517 00:14:47.988197 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4d57c09-c259-4fbd-b232-f553a99b7b98-hostproc\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:47.988724 kubelet[2915]: I0517 00:14:47.988216 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4d57c09-c259-4fbd-b232-f553a99b7b98-cilium-config-path\") pod \"cilium-9grff\" (UID: \"b4d57c09-c259-4fbd-b232-f553a99b7b98\") " pod="kube-system/cilium-9grff" May 17 00:14:48.070853 sshd[4676]: 
pam_unix(sshd:session): session closed for user core May 17 00:14:48.076597 systemd[1]: sshd@20-49.12.39.64:22-139.178.68.195:41116.service: Deactivated successfully. May 17 00:14:48.081103 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:14:48.082558 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. May 17 00:14:48.083454 systemd-logind[1575]: Removed session 21. May 17 00:14:48.231689 containerd[1606]: time="2025-05-17T00:14:48.231014903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9grff,Uid:b4d57c09-c259-4fbd-b232-f553a99b7b98,Namespace:kube-system,Attempt:0,}" May 17 00:14:48.246646 systemd[1]: Started sshd@21-49.12.39.64:22-139.178.68.195:41118.service - OpenSSH per-connection server daemon (139.178.68.195:41118). May 17 00:14:48.269095 containerd[1606]: time="2025-05-17T00:14:48.268765535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:48.269095 containerd[1606]: time="2025-05-17T00:14:48.268833296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:48.269095 containerd[1606]: time="2025-05-17T00:14:48.268850976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:48.269095 containerd[1606]: time="2025-05-17T00:14:48.268979377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:48.320661 containerd[1606]: time="2025-05-17T00:14:48.320586418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9grff,Uid:b4d57c09-c259-4fbd-b232-f553a99b7b98,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\"" May 17 00:14:48.326044 containerd[1606]: time="2025-05-17T00:14:48.325995029Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:14:48.338392 containerd[1606]: time="2025-05-17T00:14:48.338322143Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98a15536ff91a0c93214485511df0f2f126cde8d8c0d83d148203e706547d399\"" May 17 00:14:48.340372 containerd[1606]: time="2025-05-17T00:14:48.340327962Z" level=info msg="StartContainer for \"98a15536ff91a0c93214485511df0f2f126cde8d8c0d83d148203e706547d399\"" May 17 00:14:48.402396 containerd[1606]: time="2025-05-17T00:14:48.402335580Z" level=info msg="StartContainer for \"98a15536ff91a0c93214485511df0f2f126cde8d8c0d83d148203e706547d399\" returns successfully" May 17 00:14:48.447133 containerd[1606]: time="2025-05-17T00:14:48.446863635Z" level=info msg="shim disconnected" id=98a15536ff91a0c93214485511df0f2f126cde8d8c0d83d148203e706547d399 namespace=k8s.io May 17 00:14:48.447133 containerd[1606]: time="2025-05-17T00:14:48.446981756Z" level=warning msg="cleaning up after shim disconnected" id=98a15536ff91a0c93214485511df0f2f126cde8d8c0d83d148203e706547d399 namespace=k8s.io May 17 00:14:48.447133 containerd[1606]: time="2025-05-17T00:14:48.446995236Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:48.630444 containerd[1606]: time="2025-05-17T00:14:48.630037502Z" 
level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:14:48.654178 containerd[1606]: time="2025-05-17T00:14:48.653378839Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89da1d13f46eaf9c36d501addab2e8297816997f53282dc9a28888f8fa54ec8c\"" May 17 00:14:48.656938 containerd[1606]: time="2025-05-17T00:14:48.656500748Z" level=info msg="StartContainer for \"89da1d13f46eaf9c36d501addab2e8297816997f53282dc9a28888f8fa54ec8c\"" May 17 00:14:48.728289 containerd[1606]: time="2025-05-17T00:14:48.728142496Z" level=info msg="StartContainer for \"89da1d13f46eaf9c36d501addab2e8297816997f53282dc9a28888f8fa54ec8c\" returns successfully" May 17 00:14:48.765581 containerd[1606]: time="2025-05-17T00:14:48.765491564Z" level=info msg="shim disconnected" id=89da1d13f46eaf9c36d501addab2e8297816997f53282dc9a28888f8fa54ec8c namespace=k8s.io May 17 00:14:48.765581 containerd[1606]: time="2025-05-17T00:14:48.765568205Z" level=warning msg="cleaning up after shim disconnected" id=89da1d13f46eaf9c36d501addab2e8297816997f53282dc9a28888f8fa54ec8c namespace=k8s.io May 17 00:14:48.765581 containerd[1606]: time="2025-05-17T00:14:48.765578725Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:49.228557 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 41118 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:14:49.230730 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:14:49.236232 systemd-logind[1575]: New session 22 of user core. May 17 00:14:49.241915 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 17 00:14:49.638344 containerd[1606]: time="2025-05-17T00:14:49.638271641Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:14:49.660991 containerd[1606]: time="2025-05-17T00:14:49.660915998Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115\""
May 17 00:14:49.663266 containerd[1606]: time="2025-05-17T00:14:49.661600524Z" level=info msg="StartContainer for \"f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115\""
May 17 00:14:49.734454 containerd[1606]: time="2025-05-17T00:14:49.734392516Z" level=info msg="StartContainer for \"f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115\" returns successfully"
May 17 00:14:49.766782 containerd[1606]: time="2025-05-17T00:14:49.766714756Z" level=info msg="shim disconnected" id=f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115 namespace=k8s.io
May 17 00:14:49.766782 containerd[1606]: time="2025-05-17T00:14:49.766771757Z" level=warning msg="cleaning up after shim disconnected" id=f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115 namespace=k8s.io
May 17 00:14:49.766782 containerd[1606]: time="2025-05-17T00:14:49.766780317Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:49.907701 sshd[4695]: pam_unix(sshd:session): session closed for user core
May 17 00:14:49.912721 systemd[1]: sshd@21-49.12.39.64:22-139.178.68.195:41118.service: Deactivated successfully.
May 17 00:14:49.919588 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:14:49.919751 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit.
May 17 00:14:49.921597 systemd-logind[1575]: Removed session 22.
May 17 00:14:50.076518 systemd[1]: Started sshd@22-49.12.39.64:22-139.178.68.195:41124.service - OpenSSH per-connection server daemon (139.178.68.195:41124).
May 17 00:14:50.100616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a3069cb5c63f3bb4ce4d6067f67f3491ecdf12e5363eef6572965be284b115-rootfs.mount: Deactivated successfully.
May 17 00:14:50.647881 containerd[1606]: time="2025-05-17T00:14:50.647720820Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:14:50.682487 containerd[1606]: time="2025-05-17T00:14:50.682418268Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117\""
May 17 00:14:50.684620 containerd[1606]: time="2025-05-17T00:14:50.684066034Z" level=info msg="StartContainer for \"cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117\""
May 17 00:14:50.752547 containerd[1606]: time="2025-05-17T00:14:50.752486247Z" level=info msg="StartContainer for \"cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117\" returns successfully"
May 17 00:14:50.776452 containerd[1606]: time="2025-05-17T00:14:50.776119494Z" level=info msg="shim disconnected" id=cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117 namespace=k8s.io
May 17 00:14:50.776452 containerd[1606]: time="2025-05-17T00:14:50.776295015Z" level=warning msg="cleaning up after shim disconnected" id=cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117 namespace=k8s.io
May 17 00:14:50.776452 containerd[1606]: time="2025-05-17T00:14:50.776317495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:51.068396 sshd[4926]: Accepted publickey for core from 139.178.68.195 port 41124 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:14:51.070507 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:14:51.075380 systemd-logind[1575]: New session 23 of user core.
May 17 00:14:51.080690 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:14:51.100910 systemd[1]: run-containerd-runc-k8s.io-cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117-runc.YdTax7.mount: Deactivated successfully.
May 17 00:14:51.101472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cee65d92a806bb17f7b3a790bc561eca838455c62f67bda72e947c73499fd117-rootfs.mount: Deactivated successfully.
May 17 00:14:51.655417 containerd[1606]: time="2025-05-17T00:14:51.655365490Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:14:51.688304 containerd[1606]: time="2025-05-17T00:14:51.686906208Z" level=info msg="CreateContainer within sandbox \"e6dcff6bf2f29c731e3feff5ff86cad54ae97ec51414bdfe7c52b902e2e9e710\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43f2c3f514670869af40aa88d894f1fd45f916bc0fdc5bb33b0be5d2f1c073ef\""
May 17 00:14:51.699494 containerd[1606]: time="2025-05-17T00:14:51.699433055Z" level=info msg="StartContainer for \"43f2c3f514670869af40aa88d894f1fd45f916bc0fdc5bb33b0be5d2f1c073ef\""
May 17 00:14:51.829788 kubelet[2915]: E0517 00:14:51.829663 2915 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:14:51.837473 containerd[1606]: time="2025-05-17T00:14:51.837378410Z" level=info msg="StartContainer for \"43f2c3f514670869af40aa88d894f1fd45f916bc0fdc5bb33b0be5d2f1c073ef\" returns successfully"
May 17 00:14:52.133301 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 17 00:14:52.675677 kubelet[2915]: I0517 00:14:52.675546 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9grff" podStartSLOduration=5.675526089 podStartE2EDuration="5.675526089s" podCreationTimestamp="2025-05-17 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:52.675312208 +0000 UTC m=+341.140010523" watchObservedRunningTime="2025-05-17 00:14:52.675526089 +0000 UTC m=+341.140224364"
May 17 00:14:55.186039 systemd-networkd[1246]: lxc_health: Link UP
May 17 00:14:55.198487 systemd-networkd[1246]: lxc_health: Gained carrier
May 17 00:14:56.063339 systemd[1]: run-containerd-runc-k8s.io-43f2c3f514670869af40aa88d894f1fd45f916bc0fdc5bb33b0be5d2f1c073ef-runc.TLjTbp.mount: Deactivated successfully.
May 17 00:14:57.222539 systemd-networkd[1246]: lxc_health: Gained IPv6LL
May 17 00:14:58.365202 systemd[1]: run-containerd-runc-k8s.io-43f2c3f514670869af40aa88d894f1fd45f916bc0fdc5bb33b0be5d2f1c073ef-runc.54VDmk.mount: Deactivated successfully.
May 17 00:15:03.017073 sshd[4926]: pam_unix(sshd:session): session closed for user core
May 17 00:15:03.022326 systemd[1]: sshd@22-49.12.39.64:22-139.178.68.195:41124.service: Deactivated successfully.
May 17 00:15:03.028648 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:15:03.030641 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit.
May 17 00:15:03.031736 systemd-logind[1575]: Removed session 23.
May 17 00:15:11.663881 containerd[1606]: time="2025-05-17T00:15:11.663792074Z" level=info msg="StopPodSandbox for \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\""
May 17 00:15:11.664337 containerd[1606]: time="2025-05-17T00:15:11.663902514Z" level=info msg="TearDown network for sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" successfully"
May 17 00:15:11.664337 containerd[1606]: time="2025-05-17T00:15:11.663917834Z" level=info msg="StopPodSandbox for \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" returns successfully"
May 17 00:15:11.664919 containerd[1606]: time="2025-05-17T00:15:11.664878838Z" level=info msg="RemovePodSandbox for \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\""
May 17 00:15:11.665002 containerd[1606]: time="2025-05-17T00:15:11.664925399Z" level=info msg="Forcibly stopping sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\""
May 17 00:15:11.665027 containerd[1606]: time="2025-05-17T00:15:11.664998119Z" level=info msg="TearDown network for sandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" successfully"
May 17 00:15:11.668541 containerd[1606]: time="2025-05-17T00:15:11.668496375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:15:11.668729 containerd[1606]: time="2025-05-17T00:15:11.668613615Z" level=info msg="RemovePodSandbox \"a0d6d2d8f50aa8617a60e62fa0db97ea8c9cc48e6a5a4aaf8e5c2c203f186080\" returns successfully"
May 17 00:15:11.669474 containerd[1606]: time="2025-05-17T00:15:11.669265418Z" level=info msg="StopPodSandbox for \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\""
May 17 00:15:11.669474 containerd[1606]: time="2025-05-17T00:15:11.669375819Z" level=info msg="TearDown network for sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" successfully"
May 17 00:15:11.669474 containerd[1606]: time="2025-05-17T00:15:11.669389459Z" level=info msg="StopPodSandbox for \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" returns successfully"
May 17 00:15:11.669908 containerd[1606]: time="2025-05-17T00:15:11.669799660Z" level=info msg="RemovePodSandbox for \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\""
May 17 00:15:11.669908 containerd[1606]: time="2025-05-17T00:15:11.669826221Z" level=info msg="Forcibly stopping sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\""
May 17 00:15:11.669908 containerd[1606]: time="2025-05-17T00:15:11.669874501Z" level=info msg="TearDown network for sandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" successfully"
May 17 00:15:11.673115 containerd[1606]: time="2025-05-17T00:15:11.673049155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:15:11.673274 containerd[1606]: time="2025-05-17T00:15:11.673126355Z" level=info msg="RemovePodSandbox \"142dd571353d48a7fc282e50acf1760d8c7133f3cc9b62772206b3b5679381e7\" returns successfully"
May 17 00:15:18.244046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473-rootfs.mount: Deactivated successfully.
May 17 00:15:18.262230 containerd[1606]: time="2025-05-17T00:15:18.262162832Z" level=info msg="shim disconnected" id=cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473 namespace=k8s.io
May 17 00:15:18.263162 containerd[1606]: time="2025-05-17T00:15:18.262910915Z" level=warning msg="cleaning up after shim disconnected" id=cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473 namespace=k8s.io
May 17 00:15:18.263162 containerd[1606]: time="2025-05-17T00:15:18.262961795Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:15:18.476211 kubelet[2915]: E0517 00:15:18.476143 2915 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36214->10.0.0.2:2379: read: connection timed out"
May 17 00:15:18.502068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2-rootfs.mount: Deactivated successfully.
May 17 00:15:18.508572 containerd[1606]: time="2025-05-17T00:15:18.508467749Z" level=info msg="shim disconnected" id=32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2 namespace=k8s.io
May 17 00:15:18.508572 containerd[1606]: time="2025-05-17T00:15:18.508567669Z" level=warning msg="cleaning up after shim disconnected" id=32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2 namespace=k8s.io
May 17 00:15:18.508765 containerd[1606]: time="2025-05-17T00:15:18.508591469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:15:18.721117 kubelet[2915]: I0517 00:15:18.720774 2915 scope.go:117] "RemoveContainer" containerID="32709cc31ed0b3d65297fb7f012b755ea3e467e97e98efeadbf3a724ff7f10b2"
May 17 00:15:18.724002 containerd[1606]: time="2025-05-17T00:15:18.723966081Z" level=info msg="CreateContainer within sandbox \"2e5015015e1b5ba79c7e20e5f94014a5f80e822c700060142e10d2c7e8f490fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:15:18.725237 kubelet[2915]: I0517 00:15:18.725180 2915 scope.go:117] "RemoveContainer" containerID="cf954bb681e0ac2de5e0b7d641c46670093375095b030c74176430f5f23a1473"
May 17 00:15:18.727599 containerd[1606]: time="2025-05-17T00:15:18.727543338Z" level=info msg="CreateContainer within sandbox \"2c2cb3e1ae1ed000f99894e0fe2d01b78b4ddf11df5f223bb0bc2214bae58133\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:15:18.748494 containerd[1606]: time="2025-05-17T00:15:18.748440556Z" level=info msg="CreateContainer within sandbox \"2e5015015e1b5ba79c7e20e5f94014a5f80e822c700060142e10d2c7e8f490fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a87abe585d4c62be93a493cf539fadfa9574fd4e6253bb59a2a3e5008a68cc20\""
May 17 00:15:18.750487 containerd[1606]: time="2025-05-17T00:15:18.749664162Z" level=info msg="StartContainer for \"a87abe585d4c62be93a493cf539fadfa9574fd4e6253bb59a2a3e5008a68cc20\""
May 17 00:15:18.761003 containerd[1606]: time="2025-05-17T00:15:18.760837694Z" level=info msg="CreateContainer within sandbox \"2c2cb3e1ae1ed000f99894e0fe2d01b78b4ddf11df5f223bb0bc2214bae58133\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"970ce30ddde330afbe2b0ba9f6bcfd71cb8e70bb5354312853044fa2944189f9\""
May 17 00:15:18.762612 containerd[1606]: time="2025-05-17T00:15:18.762572582Z" level=info msg="StartContainer for \"970ce30ddde330afbe2b0ba9f6bcfd71cb8e70bb5354312853044fa2944189f9\""
May 17 00:15:18.828421 containerd[1606]: time="2025-05-17T00:15:18.828371412Z" level=info msg="StartContainer for \"a87abe585d4c62be93a493cf539fadfa9574fd4e6253bb59a2a3e5008a68cc20\" returns successfully"
May 17 00:15:18.836659 containerd[1606]: time="2025-05-17T00:15:18.836587450Z" level=info msg="StartContainer for \"970ce30ddde330afbe2b0ba9f6bcfd71cb8e70bb5354312853044fa2944189f9\" returns successfully"