Jul 10 23:35:17.902523 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 23:35:17.902554 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Jul 10 22:12:17 -00 2025
Jul 10 23:35:17.902565 kernel: KASLR enabled
Jul 10 23:35:17.902571 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 10 23:35:17.902576 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jul 10 23:35:17.902582 kernel: random: crng init done
Jul 10 23:35:17.902589 kernel: secureboot: Secure boot disabled
Jul 10 23:35:17.902594 kernel: ACPI: Early table checksum verification disabled
Jul 10 23:35:17.902600 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 10 23:35:17.902608 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 10 23:35:17.902614 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902620 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902626 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902632 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902639 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902647 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902653 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902659 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902666 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:35:17.902672 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 10 23:35:17.902678 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 10 23:35:17.902684 kernel: NUMA: Failed to initialise from firmware
Jul 10 23:35:17.902691 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 10 23:35:17.902697 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jul 10 23:35:17.902703 kernel: Zone ranges:
Jul 10 23:35:17.902710 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 10 23:35:17.902716 kernel: DMA32 empty
Jul 10 23:35:17.902722 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 10 23:35:17.902728 kernel: Movable zone start for each node
Jul 10 23:35:17.902735 kernel: Early memory node ranges
Jul 10 23:35:17.902741 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jul 10 23:35:17.902747 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jul 10 23:35:17.902753 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jul 10 23:35:17.902760 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 10 23:35:17.902766 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 10 23:35:17.902772 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 10 23:35:17.902778 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 10 23:35:17.902785 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 10 23:35:17.902791 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 10 23:35:17.902797 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 10 23:35:17.902806 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 10 23:35:17.902813 kernel: psci: probing for conduit method from ACPI.
Jul 10 23:35:17.902819 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 23:35:17.902827 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 23:35:17.902833 kernel: psci: Trusted OS migration not required
Jul 10 23:35:17.902840 kernel: psci: SMC Calling Convention v1.1
Jul 10 23:35:17.902846 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 23:35:17.902853 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 23:35:17.902859 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 23:35:17.902866 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 10 23:35:17.902872 kernel: Detected PIPT I-cache on CPU0
Jul 10 23:35:17.902879 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 23:35:17.902885 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 23:35:17.902896 kernel: CPU features: detected: Spectre-v4
Jul 10 23:35:17.902905 kernel: CPU features: detected: Spectre-BHB
Jul 10 23:35:17.902912 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 23:35:17.902919 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 23:35:17.902927 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 23:35:17.902934 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 23:35:17.902940 kernel: alternatives: applying boot alternatives
Jul 10 23:35:17.902948 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc
Jul 10 23:35:17.902955 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 23:35:17.902962 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 23:35:17.902969 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 23:35:17.902978 kernel: Fallback order for Node 0: 0
Jul 10 23:35:17.902984 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 10 23:35:17.902990 kernel: Policy zone: Normal
Jul 10 23:35:17.902997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 23:35:17.903003 kernel: software IO TLB: area num 2.
Jul 10 23:35:17.903010 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 10 23:35:17.903017 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved)
Jul 10 23:35:17.903023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 23:35:17.903030 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 23:35:17.903038 kernel: rcu: RCU event tracing is enabled.
Jul 10 23:35:17.903044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 23:35:17.903051 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 23:35:17.903059 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 23:35:17.903066 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 23:35:17.903072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 23:35:17.903079 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 23:35:17.903085 kernel: GICv3: 256 SPIs implemented
Jul 10 23:35:17.903092 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 23:35:17.903098 kernel: Root IRQ handler: gic_handle_irq
Jul 10 23:35:17.903105 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 23:35:17.903111 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 23:35:17.903117 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 23:35:17.903124 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 23:35:17.903133 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 23:35:17.903139 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 10 23:35:17.903146 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 10 23:35:17.903152 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 23:35:17.903158 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:35:17.903165 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 23:35:17.903172 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 23:35:17.903178 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 23:35:17.903185 kernel: Console: colour dummy device 80x25
Jul 10 23:35:17.903191 kernel: ACPI: Core revision 20230628
Jul 10 23:35:17.903198 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 23:35:17.903206 kernel: pid_max: default: 32768 minimum: 301
Jul 10 23:35:17.903213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 23:35:17.903219 kernel: landlock: Up and running.
Jul 10 23:35:17.903226 kernel: SELinux: Initializing.
Jul 10 23:35:17.903233 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:35:17.903239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:35:17.903246 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:35:17.903253 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:35:17.903260 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 23:35:17.903268 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 23:35:17.903274 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 23:35:17.903281 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 23:35:17.903288 kernel: Remapping and enabling EFI services.
Jul 10 23:35:17.903295 kernel: smp: Bringing up secondary CPUs ...
Jul 10 23:35:17.903301 kernel: Detected PIPT I-cache on CPU1
Jul 10 23:35:17.903308 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 23:35:17.903315 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 10 23:35:17.903321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:35:17.903330 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 23:35:17.903337 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 23:35:17.903349 kernel: SMP: Total of 2 processors activated.
Jul 10 23:35:17.903357 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 23:35:17.903364 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 23:35:17.903371 kernel: CPU features: detected: Common not Private translations
Jul 10 23:35:17.903378 kernel: CPU features: detected: CRC32 instructions
Jul 10 23:35:17.903385 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 23:35:17.903404 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 23:35:17.903415 kernel: CPU features: detected: LSE atomic instructions
Jul 10 23:35:17.903422 kernel: CPU features: detected: Privileged Access Never
Jul 10 23:35:17.903429 kernel: CPU features: detected: RAS Extension Support
Jul 10 23:35:17.903440 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 23:35:17.903447 kernel: CPU: All CPU(s) started at EL1
Jul 10 23:35:17.903454 kernel: alternatives: applying system-wide alternatives
Jul 10 23:35:17.903461 kernel: devtmpfs: initialized
Jul 10 23:35:17.903468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 23:35:17.903477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 23:35:17.903485 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 23:35:17.903492 kernel: SMBIOS 3.0.0 present.
Jul 10 23:35:17.904620 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 10 23:35:17.904633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 23:35:17.904641 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 23:35:17.904649 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 23:35:17.904656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 23:35:17.904664 kernel: audit: initializing netlink subsys (disabled)
Jul 10 23:35:17.904679 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jul 10 23:35:17.904687 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 23:35:17.904695 kernel: cpuidle: using governor menu
Jul 10 23:35:17.904702 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 23:35:17.904710 kernel: ASID allocator initialised with 32768 entries
Jul 10 23:35:17.904717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 23:35:17.904724 kernel: Serial: AMBA PL011 UART driver
Jul 10 23:35:17.904731 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 23:35:17.904739 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 23:35:17.904747 kernel: Modules: 509264 pages in range for PLT usage
Jul 10 23:35:17.904755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 23:35:17.904762 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 23:35:17.904769 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 23:35:17.904776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 23:35:17.904784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 23:35:17.904791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 23:35:17.904801 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 23:35:17.904811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 23:35:17.904820 kernel: ACPI: Added _OSI(Module Device)
Jul 10 23:35:17.904828 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 23:35:17.904835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 23:35:17.904842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 23:35:17.904849 kernel: ACPI: Interpreter enabled
Jul 10 23:35:17.904857 kernel: ACPI: Using GIC for interrupt routing
Jul 10 23:35:17.904864 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 23:35:17.904871 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 23:35:17.904878 kernel: printk: console [ttyAMA0] enabled
Jul 10 23:35:17.904889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 23:35:17.905083 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 23:35:17.905157 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 23:35:17.905222 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 23:35:17.905285 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 23:35:17.905348 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 23:35:17.905357 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 23:35:17.905368 kernel: PCI host bridge to bus 0000:00
Jul 10 23:35:17.905463 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 23:35:17.906651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 23:35:17.906748 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 23:35:17.906810 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 23:35:17.906900 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 23:35:17.906981 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 10 23:35:17.907070 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 10 23:35:17.907139 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 10 23:35:17.907215 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.907280 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 10 23:35:17.907355 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.907445 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 10 23:35:17.908732 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.908833 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 10 23:35:17.908911 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.908988 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 10 23:35:17.909078 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.909157 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 10 23:35:17.909243 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.909311 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 10 23:35:17.909381 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.911561 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 10 23:35:17.911811 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.911881 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 10 23:35:17.911972 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 10 23:35:17.912039 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 10 23:35:17.912115 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 10 23:35:17.912180 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 10 23:35:17.912257 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 10 23:35:17.912325 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 10 23:35:17.912419 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 23:35:17.912494 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 10 23:35:17.912597 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 10 23:35:17.912666 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 10 23:35:17.912749 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 10 23:35:17.912818 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 10 23:35:17.912885 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 10 23:35:17.912969 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 10 23:35:17.913037 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 10 23:35:17.913113 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 10 23:35:17.913181 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 10 23:35:17.913257 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 10 23:35:17.913325 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 10 23:35:17.913510 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 10 23:35:17.913609 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 10 23:35:17.913679 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 10 23:35:17.913746 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 10 23:35:17.913873 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 10 23:35:17.913948 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 10 23:35:17.914014 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 10 23:35:17.914092 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 10 23:35:17.914164 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 10 23:35:17.914229 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 10 23:35:17.914295 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 10 23:35:17.914365 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 10 23:35:17.914484 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 10 23:35:17.914602 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 10 23:35:17.914684 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 10 23:35:17.914749 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 10 23:35:17.914813 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 10 23:35:17.914883 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 10 23:35:17.914948 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 10 23:35:17.915022 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jul 10 23:35:17.915093 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 10 23:35:17.915712 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 10 23:35:17.915801 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 10 23:35:17.915873 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 10 23:35:17.915937 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 10 23:35:17.915998 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 10 23:35:17.916066 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 10 23:35:17.916130 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 10 23:35:17.916192 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 10 23:35:17.916263 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 10 23:35:17.916327 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 10 23:35:17.916447 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 10 23:35:17.917628 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 10 23:35:17.917720 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 10 23:35:17.917793 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 10 23:35:17.917860 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 10 23:35:17.917936 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 10 23:35:17.918002 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 10 23:35:17.918072 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 10 23:35:17.918148 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 10 23:35:17.918230 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 10 23:35:17.918298 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 10 23:35:17.918368 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 10 23:35:17.918463 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 10 23:35:17.919625 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 10 23:35:17.919711 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 10 23:35:17.919781 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 10 23:35:17.919846 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 10 23:35:17.920055 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 10 23:35:17.920140 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 10 23:35:17.920241 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 10 23:35:17.920313 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 10 23:35:17.920383 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 10 23:35:17.920468 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 10 23:35:17.922714 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 10 23:35:17.922824 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 10 23:35:17.922897 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 10 23:35:17.922978 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 10 23:35:17.923047 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 10 23:35:17.923115 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 10 23:35:17.923188 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 10 23:35:17.923258 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 10 23:35:17.923331 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 10 23:35:17.923420 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 10 23:35:17.923510 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 10 23:35:17.923590 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 10 23:35:17.923663 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 10 23:35:17.923731 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 10 23:35:17.923800 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 10 23:35:17.923868 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 10 23:35:17.923941 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 10 23:35:17.924023 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 10 23:35:17.924096 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 23:35:17.924171 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 10 23:35:17.924249 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 10 23:35:17.924330 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 10 23:35:17.924453 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 10 23:35:17.927485 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 10 23:35:17.927713 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 10 23:35:17.927787 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 10 23:35:17.927862 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 10 23:35:17.927926 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 10 23:35:17.927988 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 10 23:35:17.928061 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 10 23:35:17.928128 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 10 23:35:17.928197 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 10 23:35:17.928262 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 10 23:35:17.928324 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 10 23:35:17.928398 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 10 23:35:17.928481 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 10 23:35:17.929751 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 10 23:35:17.929845 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 10 23:35:17.929915 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 10 23:35:17.929987 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 10 23:35:17.930064 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 10 23:35:17.930134 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 10 23:35:17.930230 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 10 23:35:17.930312 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 10 23:35:17.930402 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 10 23:35:17.930490 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 10 23:35:17.930676 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 10 23:35:17.930759 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 10 23:35:17.930824 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 10 23:35:17.930888 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 10 23:35:17.931035 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 10 23:35:17.931127 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 10 23:35:17.931197 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 10 23:35:17.931264 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 10 23:35:17.931332 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 10 23:35:17.931473 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 10 23:35:17.933275 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 10 23:35:17.933370 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 10 23:35:17.933537 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 10 23:35:17.933612 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 10 23:35:17.933676 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 10 23:35:17.933741 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 10 23:35:17.933810 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 10 23:35:17.933886 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 10 23:35:17.933949 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 10 23:35:17.934013 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 10 23:35:17.934082 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 23:35:17.934141 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 23:35:17.934197 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 23:35:17.934272 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 10 23:35:17.934337 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 10 23:35:17.934414 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 10 23:35:17.934488 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 10 23:35:17.934877 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 10 23:35:17.934946 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 10 23:35:17.935019 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 10 23:35:17.935079 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 10 23:35:17.935150 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 10 23:35:17.935240 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 10 23:35:17.935313 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 10 23:35:17.935373 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 10 23:35:17.935472 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 10 23:35:17.936950 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 10 23:35:17.937043 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 10 23:35:17.937114 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 10 23:35:17.937179 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 10 23:35:17.937243 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 10 23:35:17.937314 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 10 23:35:17.937378 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 10 23:35:17.937458 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 10 23:35:17.937585 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 10 23:35:17.937652 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 10 23:35:17.937712 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 10 23:35:17.937781 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 10 23:35:17.937847 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 10 23:35:17.937906 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 10 23:35:17.937915 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 23:35:17.937923 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 23:35:17.937931 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 23:35:17.937939 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 23:35:17.937947 kernel: iommu: Default domain type: Translated
Jul 10 23:35:17.937954 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 23:35:17.937962 kernel: efivars: Registered efivars operations
Jul 10 23:35:17.937972 kernel: vgaarb: loaded
Jul 10 23:35:17.937979 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 23:35:17.937987 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 23:35:17.937995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 23:35:17.938002 kernel: pnp: PnP ACPI init
Jul 10 23:35:17.938082 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 23:35:17.938093 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 23:35:17.938100 kernel: NET: Registered PF_INET protocol family
Jul 10 23:35:17.938111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 23:35:17.938119 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 23:35:17.938126 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 23:35:17.938134 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 23:35:17.938142 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 23:35:17.938150 kernel: TCP: Hash tables configured (established 32768
bind 32768) Jul 10 23:35:17.938157 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:35:17.938165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 23:35:17.938173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 23:35:17.938256 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jul 10 23:35:17.938268 kernel: PCI: CLS 0 bytes, default 64 Jul 10 23:35:17.938275 kernel: kvm [1]: HYP mode not available Jul 10 23:35:17.938283 kernel: Initialise system trusted keyrings Jul 10 23:35:17.938291 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 23:35:17.938298 kernel: Key type asymmetric registered Jul 10 23:35:17.938306 kernel: Asymmetric key parser 'x509' registered Jul 10 23:35:17.938313 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 23:35:17.938321 kernel: io scheduler mq-deadline registered Jul 10 23:35:17.938333 kernel: io scheduler kyber registered Jul 10 23:35:17.938340 kernel: io scheduler bfq registered Jul 10 23:35:17.938349 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 10 23:35:17.938484 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jul 10 23:35:17.938720 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jul 10 23:35:17.938787 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.938857 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jul 10 23:35:17.938936 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jul 10 23:35:17.938999 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.939068 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jul 10 23:35:17.939132 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jul 10 23:35:17.939196 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.939264 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 10 23:35:17.939333 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 10 23:35:17.939414 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.939490 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 10 23:35:17.939602 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 10 23:35:17.939668 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.939738 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 10 23:35:17.939809 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 10 23:35:17.939873 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.939946 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 10 23:35:17.940012 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 10 23:35:17.940075 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.940146 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 10 23:35:17.940214 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 10 23:35:17.940279 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.940290 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 10 23:35:17.940358 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 10 23:35:17.940488 kernel: pcieport 0000:00:03.0: AER: enabled 
with IRQ 58 Jul 10 23:35:17.940631 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 10 23:35:17.940649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 23:35:17.940656 kernel: ACPI: button: Power Button [PWRB] Jul 10 23:35:17.940665 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 23:35:17.940771 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 10 23:35:17.940865 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 10 23:35:17.940877 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 23:35:17.940887 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 10 23:35:17.940973 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 10 23:35:17.940985 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 10 23:35:17.940997 kernel: thunder_xcv, ver 1.0 Jul 10 23:35:17.941006 kernel: thunder_bgx, ver 1.0 Jul 10 23:35:17.941013 kernel: nicpf, ver 1.0 Jul 10 23:35:17.941021 kernel: nicvf, ver 1.0 Jul 10 23:35:17.941103 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 23:35:17.941164 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:35:17 UTC (1752190517) Jul 10 23:35:17.941174 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 23:35:17.941182 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 10 23:35:17.941192 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 10 23:35:17.941199 kernel: watchdog: Hard watchdog permanently disabled Jul 10 23:35:17.941207 kernel: NET: Registered PF_INET6 protocol family Jul 10 23:35:17.941214 kernel: Segment Routing with IPv6 Jul 10 23:35:17.941222 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 23:35:17.941229 kernel: NET: Registered PF_PACKET protocol family Jul 10 23:35:17.941237 kernel: Key type dns_resolver 
registered Jul 10 23:35:17.941244 kernel: registered taskstats version 1 Jul 10 23:35:17.941252 kernel: Loading compiled-in X.509 certificates Jul 10 23:35:17.941261 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 31389229b1c1b066a3aecee2ec344e038e2f2cc0' Jul 10 23:35:17.941268 kernel: Key type .fscrypt registered Jul 10 23:35:17.941276 kernel: Key type fscrypt-provisioning registered Jul 10 23:35:17.941283 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 23:35:17.941291 kernel: ima: Allocated hash algorithm: sha1 Jul 10 23:35:17.941298 kernel: ima: No architecture policies found Jul 10 23:35:17.941306 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 23:35:17.941313 kernel: clk: Disabling unused clocks Jul 10 23:35:17.941321 kernel: Freeing unused kernel memory: 38336K Jul 10 23:35:17.941330 kernel: Run /init as init process Jul 10 23:35:17.941340 kernel: with arguments: Jul 10 23:35:17.941348 kernel: /init Jul 10 23:35:17.941355 kernel: with environment: Jul 10 23:35:17.941362 kernel: HOME=/ Jul 10 23:35:17.941369 kernel: TERM=linux Jul 10 23:35:17.941377 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 23:35:17.941385 systemd[1]: Successfully made /usr/ read-only. Jul 10 23:35:17.941416 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:35:17.941427 systemd[1]: Detected virtualization kvm. Jul 10 23:35:17.941437 systemd[1]: Detected architecture arm64. Jul 10 23:35:17.941446 systemd[1]: Running in initrd. Jul 10 23:35:17.941456 systemd[1]: No hostname configured, using default hostname. Jul 10 23:35:17.941464 systemd[1]: Hostname set to . 
Jul 10 23:35:17.941472 systemd[1]: Initializing machine ID from VM UUID. Jul 10 23:35:17.941480 systemd[1]: Queued start job for default target initrd.target. Jul 10 23:35:17.941490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:35:17.941608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:35:17.941619 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 23:35:17.941627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:35:17.941635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 23:35:17.941644 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 23:35:17.941653 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 23:35:17.941666 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 23:35:17.941674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:35:17.941682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:35:17.941690 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:35:17.941698 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:35:17.941706 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:35:17.941714 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:35:17.941722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:35:17.941732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:35:17.941740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 10 23:35:17.941748 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 23:35:17.941756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:35:17.941764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:35:17.941772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:35:17.941780 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:35:17.941788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 23:35:17.941796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:35:17.941806 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 23:35:17.941814 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 23:35:17.941822 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:35:17.941830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:35:17.941838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:17.941846 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 23:35:17.941855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:35:17.941904 systemd-journald[236]: Collecting audit messages is disabled. Jul 10 23:35:17.941928 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 23:35:17.941937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 23:35:17.941945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:17.941953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 10 23:35:17.941962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:17.941970 kernel: Bridge firewalling registered Jul 10 23:35:17.941978 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 23:35:17.941986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:35:17.941996 systemd-journald[236]: Journal started Jul 10 23:35:17.942017 systemd-journald[236]: Runtime Journal (/run/log/journal/f3a53f3557ee47d5a169ac7de2447047) is 8M, max 76.6M, 68.6M free. Jul 10 23:35:17.918174 systemd-modules-load[237]: Inserted module 'overlay' Jul 10 23:35:17.943264 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:35:17.935554 systemd-modules-load[237]: Inserted module 'br_netfilter' Jul 10 23:35:17.949779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:35:17.956608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:35:17.958961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:35:17.967617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:35:17.975317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:35:17.980876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:35:17.999050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:35:18.001100 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:18.004693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 10 23:35:18.023578 dracut-cmdline[275]: dracut-dracut-053 Jul 10 23:35:18.026676 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc Jul 10 23:35:18.036241 systemd-resolved[273]: Positive Trust Anchors: Jul 10 23:35:18.036262 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:35:18.036292 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:35:18.042593 systemd-resolved[273]: Defaulting to hostname 'linux'. Jul 10 23:35:18.043661 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:35:18.046314 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:35:18.131585 kernel: SCSI subsystem initialized Jul 10 23:35:18.136537 kernel: Loading iSCSI transport class v2.0-870. Jul 10 23:35:18.145601 kernel: iscsi: registered transport (tcp) Jul 10 23:35:18.160582 kernel: iscsi: registered transport (qla4xxx) Jul 10 23:35:18.160682 kernel: QLogic iSCSI HBA Driver Jul 10 23:35:18.220236 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jul 10 23:35:18.227794 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 23:35:18.254083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 23:35:18.254172 kernel: device-mapper: uevent: version 1.0.3 Jul 10 23:35:18.254188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 10 23:35:18.309813 kernel: raid6: neonx8 gen() 15615 MB/s Jul 10 23:35:18.326575 kernel: raid6: neonx4 gen() 15706 MB/s Jul 10 23:35:18.343634 kernel: raid6: neonx2 gen() 13176 MB/s Jul 10 23:35:18.360587 kernel: raid6: neonx1 gen() 10426 MB/s Jul 10 23:35:18.377556 kernel: raid6: int64x8 gen() 6654 MB/s Jul 10 23:35:18.394574 kernel: raid6: int64x4 gen() 7264 MB/s Jul 10 23:35:18.411570 kernel: raid6: int64x2 gen() 5875 MB/s Jul 10 23:35:18.429062 kernel: raid6: int64x1 gen() 4905 MB/s Jul 10 23:35:18.429148 kernel: raid6: using algorithm neonx4 gen() 15706 MB/s Jul 10 23:35:18.445561 kernel: raid6: .... xor() 12223 MB/s, rmw enabled Jul 10 23:35:18.445643 kernel: raid6: using neon recovery algorithm Jul 10 23:35:18.450595 kernel: xor: measuring software checksum speed Jul 10 23:35:18.450684 kernel: 8regs : 20034 MB/sec Jul 10 23:35:18.451714 kernel: 32regs : 21699 MB/sec Jul 10 23:35:18.451770 kernel: arm64_neon : 27775 MB/sec Jul 10 23:35:18.451783 kernel: xor: using function: arm64_neon (27775 MB/sec) Jul 10 23:35:18.505536 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 23:35:18.520748 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:35:18.528835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:35:18.547029 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jul 10 23:35:18.551156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 10 23:35:18.561142 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 23:35:18.577346 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jul 10 23:35:18.617584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:35:18.624689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:35:18.675878 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:35:18.686718 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 23:35:18.704596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 23:35:18.708578 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:35:18.709229 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:35:18.711521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:35:18.723879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 23:35:18.742715 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:35:18.803686 kernel: scsi host0: Virtio SCSI HBA Jul 10 23:35:18.807770 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 23:35:18.807877 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 10 23:35:18.832782 kernel: ACPI: bus type USB registered Jul 10 23:35:18.832853 kernel: usbcore: registered new interface driver usbfs Jul 10 23:35:18.837775 kernel: usbcore: registered new interface driver hub Jul 10 23:35:18.837849 kernel: usbcore: registered new device driver usb Jul 10 23:35:18.853977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:35:18.854116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 10 23:35:18.857127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:18.858112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:35:18.858310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:18.859972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:18.869028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:18.878641 kernel: sr 0:0:0:0: Power-on or device reset occurred Jul 10 23:35:18.878933 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jul 10 23:35:18.879029 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 23:35:18.877669 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:35:18.881520 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jul 10 23:35:18.885883 kernel: sd 0:0:0:1: Power-on or device reset occurred Jul 10 23:35:18.890915 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 10 23:35:18.891016 kernel: sd 0:0:0:1: [sda] Write Protect is off Jul 10 23:35:18.891099 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jul 10 23:35:18.891181 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 10 23:35:18.901882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 23:35:18.901956 kernel: GPT:17805311 != 80003071 Jul 10 23:35:18.901980 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 23:35:18.901990 kernel: GPT:17805311 != 80003071 Jul 10 23:35:18.901999 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 10 23:35:18.902008 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 23:35:18.902018 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jul 10 23:35:18.913946 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 10 23:35:18.914212 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 10 23:35:18.916055 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 10 23:35:18.917958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:18.921847 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 10 23:35:18.922093 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 10 23:35:18.924538 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 10 23:35:18.924773 kernel: hub 1-0:1.0: USB hub found Jul 10 23:35:18.927603 kernel: hub 1-0:1.0: 4 ports detected Jul 10 23:35:18.928013 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 10 23:35:18.928550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 23:35:18.930539 kernel: hub 2-0:1.0: USB hub found Jul 10 23:35:18.932546 kernel: hub 2-0:1.0: 4 ports detected Jul 10 23:35:18.961231 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:35:18.986530 kernel: BTRFS: device fsid 28ea517e-145c-4223-93e8-6347aefbc032 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (502) Jul 10 23:35:19.003558 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (531) Jul 10 23:35:19.006295 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 10 23:35:19.015545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Jul 10 23:35:19.028343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 10 23:35:19.029526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 10 23:35:19.038661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 10 23:35:19.050797 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 23:35:19.062339 disk-uuid[575]: Primary Header is updated. Jul 10 23:35:19.062339 disk-uuid[575]: Secondary Entries is updated. Jul 10 23:35:19.062339 disk-uuid[575]: Secondary Header is updated. Jul 10 23:35:19.068535 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 23:35:19.165552 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 10 23:35:19.300613 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 10 23:35:19.300690 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 10 23:35:19.301528 kernel: usbcore: registered new interface driver usbhid Jul 10 23:35:19.301553 kernel: usbhid: USB HID core driver Jul 10 23:35:19.407578 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 10 23:35:19.538678 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 10 23:35:19.593711 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 10 23:35:20.086905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 23:35:20.086969 disk-uuid[576]: The operation has completed successfully. Jul 10 23:35:20.153034 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 10 23:35:20.153164 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 23:35:20.182851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 23:35:20.187983 sh[591]: Success Jul 10 23:35:20.207810 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 23:35:20.270892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 23:35:20.282826 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 23:35:20.284234 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 23:35:20.316219 kernel: BTRFS info (device dm-0): first mount of filesystem 28ea517e-145c-4223-93e8-6347aefbc032 Jul 10 23:35:20.316299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:20.316315 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 10 23:35:20.316331 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 10 23:35:20.317611 kernel: BTRFS info (device dm-0): using free space tree Jul 10 23:35:20.326583 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 10 23:35:20.328891 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 23:35:20.330373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 23:35:20.335797 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 23:35:20.338817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 10 23:35:20.372695 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:20.372778 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:35:20.372795 kernel: BTRFS info (device sda6): using free space tree Jul 10 23:35:20.378877 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 10 23:35:20.378965 kernel: BTRFS info (device sda6): auto enabling async discard Jul 10 23:35:20.385724 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10 Jul 10 23:35:20.390136 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 23:35:20.397786 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 23:35:20.502757 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:35:20.510851 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:35:20.511935 ignition[676]: Ignition 2.20.0 Jul 10 23:35:20.511944 ignition[676]: Stage: fetch-offline Jul 10 23:35:20.511989 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:35:20.514801 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 10 23:35:20.511998 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 10 23:35:20.512162 ignition[676]: parsed url from cmdline: "" Jul 10 23:35:20.512166 ignition[676]: no config URL provided Jul 10 23:35:20.512170 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 23:35:20.512177 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jul 10 23:35:20.512183 ignition[676]: failed to fetch config: resource requires networking Jul 10 23:35:20.512457 ignition[676]: Ignition finished successfully Jul 10 23:35:20.545871 systemd-networkd[775]: lo: Link UP Jul 10 23:35:20.545881 systemd-networkd[775]: lo: Gained carrier Jul 10 23:35:20.549000 systemd-networkd[775]: Enumeration completed Jul 10 23:35:20.549510 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:20.549514 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:35:20.550132 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:20.550136 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:35:20.550731 systemd-networkd[775]: eth0: Link UP Jul 10 23:35:20.550734 systemd-networkd[775]: eth0: Gained carrier Jul 10 23:35:20.550742 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:20.551366 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:35:20.552864 systemd[1]: Reached target network.target - Network. 
Jul 10 23:35:20.558890 systemd-networkd[775]: eth1: Link UP
Jul 10 23:35:20.558893 systemd-networkd[775]: eth1: Gained carrier
Jul 10 23:35:20.558905 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:35:20.566954 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 23:35:20.582532 ignition[779]: Ignition 2.20.0
Jul 10 23:35:20.582543 ignition[779]: Stage: fetch
Jul 10 23:35:20.582861 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:20.582873 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:20.582983 ignition[779]: parsed url from cmdline: ""
Jul 10 23:35:20.582987 ignition[779]: no config URL provided
Jul 10 23:35:20.582992 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:35:20.583000 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:35:20.583092 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 10 23:35:20.583975 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 10 23:35:20.593625 systemd-networkd[775]: eth0: DHCPv4 address 49.13.217.224/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 10 23:35:20.618621 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 23:35:20.785245 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 10 23:35:20.792782 ignition[779]: GET result: OK
Jul 10 23:35:20.792955 ignition[779]: parsing config with SHA512: a1e5b57415cfa77e52592794a8ede0a181ba2e04d7ac85a4f499db151b1e22785bbc8b2174a888335864c45b8e7210c0a658ba3bcc2dbd6d400ee855cbf6d79f
Jul 10 23:35:20.801744 unknown[779]: fetched base config from "system"
Jul 10 23:35:20.802162 ignition[779]: fetch: fetch complete
Jul 10 23:35:20.801755 unknown[779]: fetched base config from "system"
Jul 10 23:35:20.802167 ignition[779]: fetch: fetch passed
Jul 10 23:35:20.801764 unknown[779]: fetched user config from "hetzner"
Jul 10 23:35:20.802223 ignition[779]: Ignition finished successfully
Jul 10 23:35:20.806641 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 23:35:20.816921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 23:35:20.834434 ignition[787]: Ignition 2.20.0
Jul 10 23:35:20.834452 ignition[787]: Stage: kargs
Jul 10 23:35:20.834693 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:20.834703 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:20.836880 ignition[787]: kargs: kargs passed
Jul 10 23:35:20.836967 ignition[787]: Ignition finished successfully
Jul 10 23:35:20.839666 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 23:35:20.847831 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 23:35:20.863986 ignition[794]: Ignition 2.20.0
Jul 10 23:35:20.864000 ignition[794]: Stage: disks
Jul 10 23:35:20.864213 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:20.866824 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 23:35:20.864224 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:20.865373 ignition[794]: disks: disks passed
Jul 10 23:35:20.868298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 23:35:20.865448 ignition[794]: Ignition finished successfully
Jul 10 23:35:20.869657 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 23:35:20.870721 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:35:20.871846 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:35:20.872451 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:35:20.885547 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 23:35:20.906376 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 10 23:35:20.910691 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 23:35:20.916740 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 23:35:20.968566 kernel: EXT4-fs (sda9): mounted filesystem ef1c88fa-d23e-4a16-bbbf-07c92f8585ec r/w with ordered data mode. Quota mode: none.
Jul 10 23:35:20.969396 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 23:35:20.970853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:35:20.980808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:35:20.984691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 23:35:20.993749 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 10 23:35:20.999728 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 23:35:21.003877 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (811)
Jul 10 23:35:21.003906 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:35:21.003917 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:35:21.003927 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:35:20.999783 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:35:21.004109 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 23:35:21.012686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 23:35:21.017125 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 10 23:35:21.017203 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:35:21.021061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:35:21.071731 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 23:35:21.075771 coreos-metadata[813]: Jul 10 23:35:21.075 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 10 23:35:21.079678 coreos-metadata[813]: Jul 10 23:35:21.079 INFO Fetch successful
Jul 10 23:35:21.080279 coreos-metadata[813]: Jul 10 23:35:21.079 INFO wrote hostname ci-4230-2-1-n-56a4dae949 to /sysroot/etc/hostname
Jul 10 23:35:21.081708 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jul 10 23:35:21.086653 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:35:21.090434 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 23:35:21.095946 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 23:35:21.215176 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 23:35:21.221750 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 23:35:21.225111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 23:35:21.236673 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:35:21.270365 ignition[928]: INFO : Ignition 2.20.0
Jul 10 23:35:21.270365 ignition[928]: INFO : Stage: mount
Jul 10 23:35:21.271478 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:21.271478 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:21.274052 ignition[928]: INFO : mount: mount passed
Jul 10 23:35:21.274052 ignition[928]: INFO : Ignition finished successfully
Jul 10 23:35:21.273776 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 23:35:21.280703 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 23:35:21.282243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 23:35:21.313685 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 23:35:21.321832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:35:21.346747 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (939)
Jul 10 23:35:21.348913 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:35:21.349011 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:35:21.349031 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:35:21.353711 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 10 23:35:21.353794 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:35:21.356827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:35:21.382618 ignition[956]: INFO : Ignition 2.20.0
Jul 10 23:35:21.382618 ignition[956]: INFO : Stage: files
Jul 10 23:35:21.383899 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:21.383899 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:21.385693 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 23:35:21.386403 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 23:35:21.386403 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 23:35:21.390421 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 23:35:21.392109 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 23:35:21.392109 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 23:35:21.391163 unknown[956]: wrote ssh authorized keys file for user: core
Jul 10 23:35:21.395343 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:35:21.395343 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 10 23:35:21.482030 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 23:35:21.611315 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:35:21.611315 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:35:21.614101 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 23:35:21.792828 systemd-networkd[775]: eth1: Gained IPv6LL
Jul 10 23:35:22.194853 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 23:35:22.293696 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:35:22.293696 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 23:35:22.293696 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:35:22.298631 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 10 23:35:22.305193 systemd-networkd[775]: eth0: Gained IPv6LL
Jul 10 23:35:22.854674 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 23:35:23.042571 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:35:23.042571 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:35:23.046826 ignition[956]: INFO : files: files passed
Jul 10 23:35:23.046826 ignition[956]: INFO : Ignition finished successfully
Jul 10 23:35:23.047875 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 23:35:23.055993 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 23:35:23.060365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 23:35:23.067180 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 23:35:23.067299 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 23:35:23.081317 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:35:23.081317 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:35:23.085608 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:35:23.087235 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:35:23.088321 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 23:35:23.093771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 23:35:23.126865 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 23:35:23.127019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 23:35:23.130445 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 23:35:23.131465 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 23:35:23.132962 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 23:35:23.134643 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 23:35:23.161618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:35:23.174804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 23:35:23.189576 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:35:23.190255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:35:23.192234 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 23:35:23.193929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 23:35:23.194127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:35:23.195369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 23:35:23.196557 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 23:35:23.198004 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 23:35:23.199613 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:35:23.200362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 23:35:23.201659 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 23:35:23.202687 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:35:23.203868 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 23:35:23.205071 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 23:35:23.206322 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 23:35:23.207332 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 23:35:23.207590 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:35:23.209103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:35:23.209816 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:35:23.211165 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 23:35:23.212231 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:35:23.213045 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 23:35:23.213230 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:35:23.214632 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 23:35:23.215037 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:35:23.215886 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 23:35:23.216057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 23:35:23.216871 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 23:35:23.217042 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:35:23.228015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 23:35:23.229412 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 23:35:23.229815 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:35:23.234897 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 23:35:23.235918 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 23:35:23.236150 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:35:23.239120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 23:35:23.239289 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:35:23.255204 ignition[1009]: INFO : Ignition 2.20.0
Jul 10 23:35:23.257156 ignition[1009]: INFO : Stage: umount
Jul 10 23:35:23.257156 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:35:23.257156 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 10 23:35:23.256242 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 23:35:23.263428 ignition[1009]: INFO : umount: umount passed
Jul 10 23:35:23.263428 ignition[1009]: INFO : Ignition finished successfully
Jul 10 23:35:23.258308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 23:35:23.263112 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 23:35:23.263738 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 23:35:23.266216 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 23:35:23.266339 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 23:35:23.269122 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 23:35:23.269206 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 23:35:23.270713 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 23:35:23.270772 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 23:35:23.271835 systemd[1]: Stopped target network.target - Network.
Jul 10 23:35:23.273816 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 23:35:23.273912 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:35:23.275707 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 23:35:23.277024 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 23:35:23.278450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:35:23.279671 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 23:35:23.280494 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 23:35:23.283290 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 23:35:23.283345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:35:23.284192 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 23:35:23.284232 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:35:23.287159 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 23:35:23.287240 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 23:35:23.288173 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 23:35:23.288231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 23:35:23.291315 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 23:35:23.293648 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 23:35:23.299686 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 23:35:23.300323 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 23:35:23.300453 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 23:35:23.304548 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 23:35:23.304875 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 23:35:23.304968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 23:35:23.306867 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 23:35:23.306980 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 23:35:23.308158 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 23:35:23.308222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:35:23.311129 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 23:35:23.314457 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 23:35:23.314652 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 23:35:23.317813 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 23:35:23.318026 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 23:35:23.318059 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:35:23.323710 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 23:35:23.324230 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 23:35:23.324316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:35:23.326623 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 23:35:23.326696 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:35:23.328653 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 23:35:23.328706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:35:23.329872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:35:23.332441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 23:35:23.345862 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 23:35:23.346028 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 23:35:23.347830 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 23:35:23.347984 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:35:23.349793 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 23:35:23.349922 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:35:23.350806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 23:35:23.350847 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:35:23.353244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 23:35:23.353372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:35:23.354770 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 23:35:23.354822 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:35:23.356177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 23:35:23.356225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:35:23.366793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 23:35:23.367356 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 23:35:23.367452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:35:23.368327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:35:23.368394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:35:23.376976 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 23:35:23.377255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 23:35:23.378834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 23:35:23.387843 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 23:35:23.398999 systemd[1]: Switching root.
Jul 10 23:35:23.434093 systemd-journald[236]: Journal stopped
Jul 10 23:35:24.615286 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jul 10 23:35:24.615413 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 23:35:24.615436 kernel: SELinux: policy capability open_perms=1
Jul 10 23:35:24.615446 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 23:35:24.615455 kernel: SELinux: policy capability always_check_network=0
Jul 10 23:35:24.615464 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 23:35:24.615474 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 23:35:24.615483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 23:35:24.617136 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 23:35:24.617197 kernel: audit: type=1403 audit(1752190523.632:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 23:35:24.617210 systemd[1]: Successfully loaded SELinux policy in 39.133ms.
Jul 10 23:35:24.617239 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.271ms.
Jul 10 23:35:24.617251 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:35:24.617262 systemd[1]: Detected virtualization kvm.
Jul 10 23:35:24.617272 systemd[1]: Detected architecture arm64.
Jul 10 23:35:24.617282 systemd[1]: Detected first boot.
Jul 10 23:35:24.617292 systemd[1]: Hostname set to .
Jul 10 23:35:24.617307 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 23:35:24.617317 zram_generator::config[1054]: No configuration found.
Jul 10 23:35:24.617330 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 23:35:24.617343 systemd[1]: Populated /etc with preset unit settings.
Jul 10 23:35:24.617356 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 23:35:24.617368 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 23:35:24.617397 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 23:35:24.617408 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 23:35:24.617422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 23:35:24.617432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 23:35:24.617444 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 23:35:24.617456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 23:35:24.617468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 23:35:24.617480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 23:35:24.617493 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 23:35:24.617647 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 23:35:24.617668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:35:24.617679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:35:24.617690 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 23:35:24.617700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 23:35:24.617711 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 23:35:24.617721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:35:24.617731 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 23:35:24.617742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:35:24.617754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 23:35:24.617764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 23:35:24.617774 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:35:24.617784 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 23:35:24.617795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:35:24.617812 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:35:24.617823 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:35:24.617833 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:35:24.617844 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 23:35:24.617855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 23:35:24.617866 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 23:35:24.617876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:35:24.617887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:35:24.617897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:35:24.617907 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 23:35:24.617917 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 23:35:24.617931 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 23:35:24.617943 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 23:35:24.617953 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 23:35:24.617963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 23:35:24.617972 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 23:35:24.617983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 23:35:24.617995 systemd[1]: Reached target machines.target - Containers. Jul 10 23:35:24.618005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 23:35:24.618016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:24.618026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:35:24.618037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 23:35:24.618048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:35:24.618057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:35:24.618068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:35:24.618078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 23:35:24.618089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:35:24.618103 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 23:35:24.618114 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 23:35:24.618125 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 23:35:24.618135 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 23:35:24.618145 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 10 23:35:24.618156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:24.618166 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:35:24.618179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:35:24.618189 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:35:24.618199 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 23:35:24.618210 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 23:35:24.618220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:35:24.618235 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 23:35:24.618245 systemd[1]: Stopped verity-setup.service. Jul 10 23:35:24.618255 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 23:35:24.618267 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 23:35:24.618277 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 23:35:24.618287 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 23:35:24.618297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 23:35:24.618308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 23:35:24.618320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:35:24.618330 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 23:35:24.618340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 23:35:24.618350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 10 23:35:24.618361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:35:24.618371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:35:24.618426 kernel: fuse: init (API version 7.39) Jul 10 23:35:24.618439 kernel: loop: module loaded Jul 10 23:35:24.618449 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 23:35:24.618459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:35:24.618470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:35:24.618481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:35:24.618491 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 23:35:24.618511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 23:35:24.618525 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:35:24.618537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:35:24.618549 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 23:35:24.618560 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 23:35:24.618571 kernel: ACPI: bus type drm_connector registered Jul 10 23:35:24.618582 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:35:24.618592 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:35:24.618602 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 23:35:24.618613 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 23:35:24.618625 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:35:24.618636 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jul 10 23:35:24.618647 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 23:35:24.618659 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 23:35:24.618670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:24.618727 systemd-journald[1118]: Collecting audit messages is disabled. Jul 10 23:35:24.618750 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 23:35:24.618764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:35:24.618777 systemd-journald[1118]: Journal started Jul 10 23:35:24.618799 systemd-journald[1118]: Runtime Journal (/run/log/journal/f3a53f3557ee47d5a169ac7de2447047) is 8M, max 76.6M, 68.6M free. Jul 10 23:35:24.280216 systemd[1]: Queued start job for default target multi-user.target. Jul 10 23:35:24.291337 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 10 23:35:24.292040 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 23:35:24.630917 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 23:35:24.630968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:35:24.634522 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 23:35:24.639057 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:35:24.642323 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 23:35:24.644710 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 10 23:35:24.651565 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 23:35:24.653929 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 23:35:24.657085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:35:24.658686 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 23:35:24.667802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 23:35:24.703822 kernel: loop0: detected capacity change from 0 to 211168 Jul 10 23:35:24.704283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 23:35:24.705295 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:35:24.717578 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 23:35:24.728297 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 23:35:24.744576 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 23:35:24.750755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:35:24.764065 systemd-journald[1118]: Time spent on flushing to /var/log/journal/f3a53f3557ee47d5a169ac7de2447047 is 38.321ms for 1144 entries. Jul 10 23:35:24.764065 systemd-journald[1118]: System Journal (/var/log/journal/f3a53f3557ee47d5a169ac7de2447047) is 8M, max 584.8M, 576.8M free. Jul 10 23:35:24.819690 systemd-journald[1118]: Received client request to flush runtime journal. Jul 10 23:35:24.819742 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 23:35:24.819756 kernel: loop1: detected capacity change from 0 to 123192 Jul 10 23:35:24.766298 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 10 23:35:24.770260 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 23:35:24.793366 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 23:35:24.822406 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 23:35:24.852471 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 23:35:24.856531 kernel: loop2: detected capacity change from 0 to 113512 Jul 10 23:35:24.866354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:35:24.917194 kernel: loop3: detected capacity change from 0 to 8 Jul 10 23:35:24.937558 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jul 10 23:35:24.938933 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jul 10 23:35:24.950539 kernel: loop4: detected capacity change from 0 to 211168 Jul 10 23:35:24.955748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:35:24.990534 kernel: loop5: detected capacity change from 0 to 123192 Jul 10 23:35:25.019533 kernel: loop6: detected capacity change from 0 to 113512 Jul 10 23:35:25.043536 kernel: loop7: detected capacity change from 0 to 8 Jul 10 23:35:25.047032 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 10 23:35:25.047722 (sd-merge)[1198]: Merged extensions into '/usr'. Jul 10 23:35:25.056168 systemd[1]: Reload requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 23:35:25.056187 systemd[1]: Reloading... Jul 10 23:35:25.230530 zram_generator::config[1227]: No configuration found. Jul 10 23:35:25.310058 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 10 23:35:25.385227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:25.449551 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 23:35:25.449816 systemd[1]: Reloading finished in 390 ms. Jul 10 23:35:25.472579 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 23:35:25.473932 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 23:35:25.488916 systemd[1]: Starting ensure-sysext.service... Jul 10 23:35:25.492876 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:35:25.508031 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 23:35:25.521894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:35:25.531607 systemd[1]: Reload requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Jul 10 23:35:25.531630 systemd[1]: Reloading... Jul 10 23:35:25.533974 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 23:35:25.534209 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 23:35:25.537724 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 23:35:25.538018 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jul 10 23:35:25.538064 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jul 10 23:35:25.544145 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 10 23:35:25.544566 systemd-tmpfiles[1265]: Skipping /boot Jul 10 23:35:25.568533 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:35:25.568741 systemd-tmpfiles[1265]: Skipping /boot Jul 10 23:35:25.578941 systemd-udevd[1268]: Using default interface naming scheme 'v255'. Jul 10 23:35:25.654531 zram_generator::config[1295]: No configuration found. Jul 10 23:35:25.858254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:25.924788 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 23:35:25.933789 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 23:35:25.934163 systemd[1]: Reloading finished in 402 ms. Jul 10 23:35:25.943103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:35:25.944322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:35:25.969610 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:35:25.982733 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 23:35:25.988862 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 23:35:25.994750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:35:26.000848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:35:26.006011 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 23:35:26.015214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:26.027857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 10 23:35:26.031912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:35:26.037903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:35:26.038877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:26.039023 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:26.041930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:26.042114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:26.042196 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:26.045860 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 23:35:26.050773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:26.054706 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:35:26.055456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:35:26.055521 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:26.058698 systemd[1]: Finished ensure-sysext.service. 
Jul 10 23:35:26.059585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:35:26.059775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:35:26.071912 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 23:35:26.086869 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 23:35:26.103068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:35:26.103293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:35:26.104753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:35:26.119547 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:35:26.120507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:35:26.122680 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:35:26.122959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:35:26.126223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:35:26.128538 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1298) Jul 10 23:35:26.148082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 23:35:26.166842 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 23:35:26.178532 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 23:35:26.179798 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 10 23:35:26.192386 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 10 23:35:26.192474 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 10 23:35:26.192486 kernel: [drm] features: -context_init Jul 10 23:35:26.192537 kernel: [drm] number of scanouts: 1 Jul 10 23:35:26.196603 kernel: [drm] number of cap sets: 0 Jul 10 23:35:26.202873 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 10 23:35:26.204629 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 23:35:26.207168 augenrules[1412]: No rules Jul 10 23:35:26.210675 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:35:26.210925 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:35:26.223575 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 23:35:26.240455 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 23:35:26.242525 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 10 23:35:26.246217 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 10 23:35:26.246353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:35:26.258773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:35:26.262245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:35:26.267796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:35:26.268561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 23:35:26.268619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:35:26.268647 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 23:35:26.277918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:35:26.278142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:35:26.285883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:35:26.286554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:35:26.288521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:35:26.310942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:35:26.311200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:35:26.318522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:35:26.333824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 10 23:35:26.350189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 23:35:26.391876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:26.402881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 23:35:26.420911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 10 23:35:26.422621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:26.432750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:35:26.440184 systemd-resolved[1375]: Positive Trust Anchors: Jul 10 23:35:26.440210 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:35:26.440245 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:35:26.441955 systemd-networkd[1374]: lo: Link UP Jul 10 23:35:26.441960 systemd-networkd[1374]: lo: Gained carrier Jul 10 23:35:26.444211 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 23:35:26.445240 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 23:35:26.446307 systemd-networkd[1374]: Enumeration completed Jul 10 23:35:26.446457 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:35:26.448666 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:26.448670 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:35:26.449267 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 23:35:26.449271 systemd-networkd[1374]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:35:26.449616 systemd-timesyncd[1390]: No network connectivity, watching for changes. Jul 10 23:35:26.451911 systemd-networkd[1374]: eth0: Link UP Jul 10 23:35:26.451923 systemd-networkd[1374]: eth0: Gained carrier Jul 10 23:35:26.451944 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:26.455786 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 23:35:26.456187 systemd-resolved[1375]: Using system hostname 'ci-4230-2-1-n-56a4dae949'. Jul 10 23:35:26.456636 systemd-networkd[1374]: eth1: Link UP Jul 10 23:35:26.456641 systemd-networkd[1374]: eth1: Gained carrier Jul 10 23:35:26.456664 systemd-networkd[1374]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:35:26.467334 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 23:35:26.470341 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:35:26.471656 systemd[1]: Reached target network.target - Network. Jul 10 23:35:26.472099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:35:26.484795 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 23:35:26.489695 systemd-networkd[1374]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 23:35:26.490534 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. 
Jul 10 23:35:26.525641 systemd-networkd[1374]: eth0: DHCPv4 address 49.13.217.224/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 10 23:35:26.526966 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Jul 10 23:35:26.527979 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Jul 10 23:35:26.532234 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 23:35:26.539777 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 23:35:26.541267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:35:26.555582 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:35:26.581422 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 23:35:26.583902 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:35:26.584712 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:35:26.585838 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 23:35:26.586842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 23:35:26.587822 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 23:35:26.588461 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 23:35:26.589249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 23:35:26.589917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 23:35:26.589957 systemd[1]: Reached target paths.target - Path Units. 
Jul 10 23:35:26.590444 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:35:26.593014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 23:35:26.595830 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 23:35:26.600286 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 23:35:26.601428 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 23:35:26.602227 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 23:35:26.605858 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 23:35:26.607192 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 23:35:26.614965 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 23:35:26.616952 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 23:35:26.617968 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:35:26.618741 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:35:26.619442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:35:26.619479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:35:26.621547 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:35:26.621685 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 23:35:26.626879 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 23:35:26.632763 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 23:35:26.637028 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 10 23:35:26.640756 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 23:35:26.642661 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 23:35:26.644077 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 23:35:26.657768 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 23:35:26.660954 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 10 23:35:26.668061 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 23:35:26.672954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 23:35:26.675722 coreos-metadata[1459]: Jul 10 23:35:26.674 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 10 23:35:26.688524 coreos-metadata[1459]: Jul 10 23:35:26.684 INFO Fetch successful Jul 10 23:35:26.688524 coreos-metadata[1459]: Jul 10 23:35:26.684 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 10 23:35:26.688524 coreos-metadata[1459]: Jul 10 23:35:26.685 INFO Fetch successful Jul 10 23:35:26.693224 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 23:35:26.697018 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 23:35:26.697647 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 23:35:26.701334 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 23:35:26.706781 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 23:35:26.709847 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 10 23:35:26.717163 jq[1461]: false
Jul 10 23:35:26.724103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 23:35:26.725642 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 23:35:26.736149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 23:35:26.738626 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 23:35:26.739161 dbus-daemon[1460]: [system] SELinux support is enabled
Jul 10 23:35:26.739712 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 23:35:26.743879 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 23:35:26.743946 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 23:35:26.745469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 23:35:26.745529 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 23:35:26.755099 extend-filesystems[1462]: Found loop4
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found loop5
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found loop6
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found loop7
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda1
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda2
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda3
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found usr
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda4
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda6
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda7
Jul 10 23:35:26.758255 extend-filesystems[1462]: Found sda9
Jul 10 23:35:26.758255 extend-filesystems[1462]: Checking size of /dev/sda9
Jul 10 23:35:26.783335 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 23:35:26.783642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 23:35:26.794079 jq[1474]: true
Jul 10 23:35:26.805171 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 23:35:26.823537 tar[1480]: linux-arm64/LICENSE
Jul 10 23:35:26.823537 tar[1480]: linux-arm64/helm
Jul 10 23:35:26.826812 extend-filesystems[1462]: Resized partition /dev/sda9
Jul 10 23:35:26.837112 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024)
Jul 10 23:35:26.844422 update_engine[1472]: I20250710 23:35:26.841611 1472 main.cc:92] Flatcar Update Engine starting
Jul 10 23:35:26.852588 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jul 10 23:35:26.856048 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 23:35:26.858682 update_engine[1472]: I20250710 23:35:26.858388 1472 update_check_scheduler.cc:74] Next update check in 6m45s
Jul 10 23:35:26.859705 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 23:35:26.872334 jq[1498]: true
Jul 10 23:35:26.878119 systemd-logind[1470]: New seat seat0.
Jul 10 23:35:26.889240 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 10 23:35:26.889298 systemd-logind[1470]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jul 10 23:35:26.890717 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 23:35:26.942524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1309)
Jul 10 23:35:26.985601 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 10 23:35:26.987902 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 10 23:35:27.110722 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 23:35:27.115899 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 23:35:27.125515 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jul 10 23:35:27.135807 systemd[1]: Starting sshkeys.service...
Jul 10 23:35:27.138128 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 23:35:27.153660 extend-filesystems[1506]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 10 23:35:27.153660 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 5
Jul 10 23:35:27.153660 extend-filesystems[1506]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jul 10 23:35:27.160018 extend-filesystems[1462]: Resized filesystem in /dev/sda9
Jul 10 23:35:27.160018 extend-filesystems[1462]: Found sr0
Jul 10 23:35:27.155434 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 23:35:27.156135 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 23:35:27.170329 containerd[1496]: time="2025-07-10T23:35:27.170205720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 10 23:35:27.176948 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 10 23:35:27.185892 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 10 23:35:27.222518 coreos-metadata[1545]: Jul 10 23:35:27.221 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 10 23:35:27.224441 coreos-metadata[1545]: Jul 10 23:35:27.223 INFO Fetch successful
Jul 10 23:35:27.228621 unknown[1545]: wrote ssh authorized keys file for user: core
Jul 10 23:35:27.251945 containerd[1496]: time="2025-07-10T23:35:27.251893840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.258841 containerd[1496]: time="2025-07-10T23:35:27.258786960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 10 23:35:27.258968 containerd[1496]: time="2025-07-10T23:35:27.258953000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 10 23:35:27.259023 containerd[1496]: time="2025-07-10T23:35:27.259011600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259305880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259331640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259465520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259482680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259816600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259836480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259851200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259860000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.259953720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.260156400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261356 containerd[1496]: time="2025-07-10T23:35:27.260281040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 23:35:27.261673 containerd[1496]: time="2025-07-10T23:35:27.260294280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 10 23:35:27.261673 containerd[1496]: time="2025-07-10T23:35:27.260392760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 10 23:35:27.261673 containerd[1496]: time="2025-07-10T23:35:27.260446880Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 23:35:27.271076 containerd[1496]: time="2025-07-10T23:35:27.271027360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 10 23:35:27.271462 containerd[1496]: time="2025-07-10T23:35:27.271445280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 10 23:35:27.272037 containerd[1496]: time="2025-07-10T23:35:27.272016840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 10 23:35:27.272148 containerd[1496]: time="2025-07-10T23:35:27.272132520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 10 23:35:27.272204 containerd[1496]: time="2025-07-10T23:35:27.272192160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 10 23:35:27.272473 containerd[1496]: time="2025-07-10T23:35:27.272450520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 10 23:35:27.273606 containerd[1496]: time="2025-07-10T23:35:27.273577920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 10 23:35:27.274361 containerd[1496]: time="2025-07-10T23:35:27.274338400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 10 23:35:27.274459 containerd[1496]: time="2025-07-10T23:35:27.274445680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 10 23:35:27.274558 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274831800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274862400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274877200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274890360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274906680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274921920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274936240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274948440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274962000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274984440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.274996880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.275010800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.275023880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.277745 containerd[1496]: time="2025-07-10T23:35:27.275035600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275049760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275061960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275077400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275090560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275108080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275119800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275131880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275144600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275159520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275182960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275197920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.275209800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.277030880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 10 23:35:27.278084 containerd[1496]: time="2025-07-10T23:35:27.277107120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277122320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277134880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277149040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277165040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277179120Z" level=info msg="NRI interface is disabled by configuration."
Jul 10 23:35:27.278324 containerd[1496]: time="2025-07-10T23:35:27.277189560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 10 23:35:27.278473 containerd[1496]: time="2025-07-10T23:35:27.277683000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 10 23:35:27.278473 containerd[1496]: time="2025-07-10T23:35:27.277739120Z" level=info msg="Connect containerd service"
Jul 10 23:35:27.278473 containerd[1496]: time="2025-07-10T23:35:27.277784560Z" level=info msg="using legacy CRI server"
Jul 10 23:35:27.278473 containerd[1496]: time="2025-07-10T23:35:27.277791800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 23:35:27.278473 containerd[1496]: time="2025-07-10T23:35:27.278060760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.278934960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279143040Z" level=info msg="Start subscribing containerd event"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279297600Z" level=info msg="Start recovering state"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279430560Z" level=info msg="Start event monitor"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279445800Z" level=info msg="Start snapshots syncer"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279455400Z" level=info msg="Start cni network conf syncer for default"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.279463480Z" level=info msg="Start streaming server"
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.280206920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.280246160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 10 23:35:27.280848 containerd[1496]: time="2025-07-10T23:35:27.280291320Z" level=info msg="containerd successfully booted in 0.118388s"
Jul 10 23:35:27.280104 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 10 23:35:27.283049 systemd[1]: Started containerd.service - containerd container runtime.
Jul 10 23:35:27.291195 systemd[1]: Finished sshkeys.service.
Jul 10 23:35:27.587992 tar[1480]: linux-arm64/README.md
Jul 10 23:35:27.601700 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 10 23:35:27.616640 systemd-networkd[1374]: eth1: Gained IPv6LL
Jul 10 23:35:27.618250 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Jul 10 23:35:27.622096 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 10 23:35:27.625904 systemd[1]: Reached target network-online.target - Network is Online.
Jul 10 23:35:27.634658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:27.646323 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 10 23:35:27.677324 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 10 23:35:27.899744 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 23:35:27.923895 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 23:35:27.932995 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 23:35:27.946601 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 23:35:27.947007 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 10 23:35:27.955159 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 10 23:35:27.966566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 10 23:35:27.975591 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 10 23:35:27.978358 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 10 23:35:27.980193 systemd[1]: Reached target getty.target - Login Prompts.
Jul 10 23:35:28.148339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 23:35:28.156204 systemd[1]: Started sshd@0-49.13.217.224:22-103.99.206.83:46516.service - OpenSSH per-connection server daemon (103.99.206.83:46516).
Jul 10 23:35:28.384796 systemd-networkd[1374]: eth0: Gained IPv6LL
Jul 10 23:35:28.385420 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Jul 10 23:35:28.506764 sshd[1588]: Connection closed by 103.99.206.83 port 46516 [preauth]
Jul 10 23:35:28.508108 systemd[1]: sshd@0-49.13.217.224:22-103.99.206.83:46516.service: Deactivated successfully.
Jul 10 23:35:28.575890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:28.576821 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:35:28.582867 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 10 23:35:28.592051 systemd[1]: Startup finished in 824ms (kernel) + 5.935s (initrd) + 4.999s (userspace) = 11.759s.
Jul 10 23:35:29.204781 kubelet[1597]: E0710 23:35:29.204723 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:35:29.208558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:35:29.208764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:35:29.209603 systemd[1]: kubelet.service: Consumed 987ms CPU time, 257.4M memory peak.
Jul 10 23:35:39.459338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 23:35:39.466992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:39.593801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:39.596314 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:35:39.649432 kubelet[1615]: E0710 23:35:39.649195 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:35:39.653059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:35:39.653238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:35:39.654213 systemd[1]: kubelet.service: Consumed 159ms CPU time, 105.5M memory peak.
Jul 10 23:35:49.769021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 23:35:49.774740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:49.923743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:49.924535 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:35:49.974115 kubelet[1629]: E0710 23:35:49.974032 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:35:49.976758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:35:49.976943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:35:49.977291 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107.2M memory peak.
Jul 10 23:35:58.993058 systemd-resolved[1375]: Clock change detected. Flushing caches.
Jul 10 23:35:58.993228 systemd-timesyncd[1390]: Contacted time server 195.201.173.232:123 (2.flatcar.pool.ntp.org).
Jul 10 23:35:58.993331 systemd-timesyncd[1390]: Initial clock synchronization to Thu 2025-07-10 23:35:58.992943 UTC.
Jul 10 23:36:00.455874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 23:36:00.465617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:36:00.608530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:36:00.608717 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:36:00.658441 kubelet[1645]: E0710 23:36:00.658367 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:36:00.662268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:36:00.662450 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:36:00.663207 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107.2M memory peak.
Jul 10 23:36:01.059521 systemd[1]: Started sshd@1-49.13.217.224:22-139.178.89.65:38340.service - OpenSSH per-connection server daemon (139.178.89.65:38340).
Jul 10 23:36:02.057250 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 38340 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU
Jul 10 23:36:02.061674 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:36:02.078461 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 10 23:36:02.084672 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 10 23:36:02.091476 systemd-logind[1470]: New session 1 of user core.
Jul 10 23:36:02.103626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 10 23:36:02.113648 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 10 23:36:02.117874 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 23:36:02.121523 systemd-logind[1470]: New session c1 of user core.
Jul 10 23:36:02.272009 systemd[1657]: Queued start job for default target default.target.
Jul 10 23:36:02.280319 systemd[1657]: Created slice app.slice - User Application Slice.
Jul 10 23:36:02.280597 systemd[1657]: Reached target paths.target - Paths.
Jul 10 23:36:02.280799 systemd[1657]: Reached target timers.target - Timers.
Jul 10 23:36:02.284281 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 10 23:36:02.298683 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 10 23:36:02.298869 systemd[1657]: Reached target sockets.target - Sockets.
Jul 10 23:36:02.298936 systemd[1657]: Reached target basic.target - Basic System.
Jul 10 23:36:02.298991 systemd[1657]: Reached target default.target - Main User Target.
Jul 10 23:36:02.299071 systemd[1657]: Startup finished in 168ms.
Jul 10 23:36:02.299125 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 10 23:36:02.311720 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 10 23:36:03.009752 systemd[1]: Started sshd@2-49.13.217.224:22-139.178.89.65:38350.service - OpenSSH per-connection server daemon (139.178.89.65:38350).
Jul 10 23:36:04.006823 sshd[1668]: Accepted publickey for core from 139.178.89.65 port 38350 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU
Jul 10 23:36:04.009226 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:36:04.016168 systemd-logind[1470]: New session 2 of user core.
Jul 10 23:36:04.028615 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 10 23:36:04.684370 sshd[1670]: Connection closed by 139.178.89.65 port 38350
Jul 10 23:36:04.686389 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Jul 10 23:36:04.690872 systemd[1]: sshd@2-49.13.217.224:22-139.178.89.65:38350.service: Deactivated successfully.
Jul 10 23:36:04.693219 systemd[1]: session-2.scope: Deactivated successfully.
Jul 10 23:36:04.700974 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Jul 10 23:36:04.704528 systemd-logind[1470]: Removed session 2. Jul 10 23:36:04.873758 systemd[1]: Started sshd@3-49.13.217.224:22-139.178.89.65:38362.service - OpenSSH per-connection server daemon (139.178.89.65:38362). Jul 10 23:36:05.873589 sshd[1676]: Accepted publickey for core from 139.178.89.65 port 38362 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:36:05.876388 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:36:05.887862 systemd-logind[1470]: New session 3 of user core. Jul 10 23:36:05.893767 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:36:06.555518 sshd[1678]: Connection closed by 139.178.89.65 port 38362 Jul 10 23:36:06.556666 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:06.561752 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Jul 10 23:36:06.562596 systemd[1]: sshd@3-49.13.217.224:22-139.178.89.65:38362.service: Deactivated successfully. Jul 10 23:36:06.565018 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 23:36:06.568611 systemd-logind[1470]: Removed session 3. Jul 10 23:36:06.736859 systemd[1]: Started sshd@4-49.13.217.224:22-139.178.89.65:38368.service - OpenSSH per-connection server daemon (139.178.89.65:38368). Jul 10 23:36:07.760595 sshd[1684]: Accepted publickey for core from 139.178.89.65 port 38368 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:36:07.762994 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:36:07.767869 systemd-logind[1470]: New session 4 of user core. Jul 10 23:36:07.778553 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 10 23:36:08.455929 sshd[1686]: Connection closed by 139.178.89.65 port 38368 Jul 10 23:36:08.456931 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:08.462526 systemd[1]: sshd@4-49.13.217.224:22-139.178.89.65:38368.service: Deactivated successfully. Jul 10 23:36:08.465627 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:36:08.466793 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:36:08.467928 systemd-logind[1470]: Removed session 4. Jul 10 23:36:08.646967 systemd[1]: Started sshd@5-49.13.217.224:22-139.178.89.65:38372.service - OpenSSH per-connection server daemon (139.178.89.65:38372). Jul 10 23:36:09.630790 sshd[1692]: Accepted publickey for core from 139.178.89.65 port 38372 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:36:09.633590 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:36:09.639607 systemd-logind[1470]: New session 5 of user core. Jul 10 23:36:09.650440 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 23:36:10.164013 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:36:10.164782 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:36:10.180475 sudo[1695]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:10.341433 sshd[1694]: Connection closed by 139.178.89.65 port 38372 Jul 10 23:36:10.342377 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:10.350783 systemd[1]: sshd@5-49.13.217.224:22-139.178.89.65:38372.service: Deactivated successfully. Jul 10 23:36:10.354977 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:36:10.356926 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:36:10.358114 systemd-logind[1470]: Removed session 5. 
Jul 10 23:36:10.537942 systemd[1]: Started sshd@6-49.13.217.224:22-139.178.89.65:49022.service - OpenSSH per-connection server daemon (139.178.89.65:49022). Jul 10 23:36:10.705655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 10 23:36:10.712605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:10.835433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:10.847414 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:36:10.899587 kubelet[1711]: E0710 23:36:10.898700 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:36:10.903446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:36:10.903567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:36:10.904001 systemd[1]: kubelet.service: Consumed 160ms CPU time, 105.3M memory peak. Jul 10 23:36:11.546696 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 49022 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:36:11.548230 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:36:11.555633 systemd-logind[1470]: New session 6 of user core. Jul 10 23:36:11.564586 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 23:36:12.079982 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:36:12.080760 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:36:12.085353 sudo[1720]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:12.093227 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:36:12.093532 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:36:12.111860 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:36:12.142624 augenrules[1742]: No rules Jul 10 23:36:12.144645 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:36:12.144877 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:36:12.146562 sudo[1719]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:12.270356 update_engine[1472]: I20250710 23:36:12.269896 1472 update_attempter.cc:509] Updating boot flags... Jul 10 23:36:12.311527 sshd[1718]: Connection closed by 139.178.89.65 port 49022 Jul 10 23:36:12.310496 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:12.319321 systemd[1]: sshd@6-49.13.217.224:22-139.178.89.65:49022.service: Deactivated successfully. Jul 10 23:36:12.323153 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:36:12.329392 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:36:12.343273 systemd-logind[1470]: Removed session 6. Jul 10 23:36:12.345735 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1756) Jul 10 23:36:12.488694 systemd[1]: Started sshd@7-49.13.217.224:22-139.178.89.65:49038.service - OpenSSH per-connection server daemon (139.178.89.65:49038). 
Jul 10 23:36:13.486688 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 49038 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:36:13.489163 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:36:13.494748 systemd-logind[1470]: New session 7 of user core. Jul 10 23:36:13.499481 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 23:36:14.015397 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:36:14.015775 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:36:14.377177 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:36:14.377594 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:36:14.628101 dockerd[1786]: time="2025-07-10T23:36:14.627924703Z" level=info msg="Starting up" Jul 10 23:36:14.722105 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2257609292-merged.mount: Deactivated successfully. Jul 10 23:36:14.750810 dockerd[1786]: time="2025-07-10T23:36:14.750548543Z" level=info msg="Loading containers: start." Jul 10 23:36:14.924358 kernel: Initializing XFRM netlink socket Jul 10 23:36:15.024741 systemd-networkd[1374]: docker0: Link UP Jul 10 23:36:15.056780 dockerd[1786]: time="2025-07-10T23:36:15.056659863Z" level=info msg="Loading containers: done." 
Jul 10 23:36:15.075191 dockerd[1786]: time="2025-07-10T23:36:15.074350063Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:36:15.075191 dockerd[1786]: time="2025-07-10T23:36:15.074500223Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 10 23:36:15.075191 dockerd[1786]: time="2025-07-10T23:36:15.074744063Z" level=info msg="Daemon has completed initialization" Jul 10 23:36:15.122222 dockerd[1786]: time="2025-07-10T23:36:15.122135383Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:36:15.123046 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:36:15.932862 containerd[1496]: time="2025-07-10T23:36:15.932614943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 23:36:16.582955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675565537.mount: Deactivated successfully. 
Jul 10 23:36:17.937149 containerd[1496]: time="2025-07-10T23:36:17.936994543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:17.939827 containerd[1496]: time="2025-07-10T23:36:17.939726103Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351808" Jul 10 23:36:17.942201 containerd[1496]: time="2025-07-10T23:36:17.942113263Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:17.946938 containerd[1496]: time="2025-07-10T23:36:17.946863783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:17.948937 containerd[1496]: time="2025-07-10T23:36:17.948752383Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.0160928s" Jul 10 23:36:17.948937 containerd[1496]: time="2025-07-10T23:36:17.948800983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 23:36:17.951202 containerd[1496]: time="2025-07-10T23:36:17.951158103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 23:36:20.059275 containerd[1496]: time="2025-07-10T23:36:20.059184463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:20.061284 containerd[1496]: time="2025-07-10T23:36:20.061066503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537643" Jul 10 23:36:20.062860 containerd[1496]: time="2025-07-10T23:36:20.062787023Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:20.066917 containerd[1496]: time="2025-07-10T23:36:20.066844503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:20.069768 containerd[1496]: time="2025-07-10T23:36:20.069548743Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 2.11824976s" Jul 10 23:36:20.069768 containerd[1496]: time="2025-07-10T23:36:20.069620663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 23:36:20.070775 containerd[1496]: time="2025-07-10T23:36:20.070709783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 23:36:20.955556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 10 23:36:20.968818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:21.114669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:36:21.117990 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:36:21.166378 kubelet[2037]: E0710 23:36:21.166324 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:36:21.169640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:36:21.169788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:36:21.171439 systemd[1]: kubelet.service: Consumed 165ms CPU time, 109.2M memory peak. Jul 10 23:36:21.656686 containerd[1496]: time="2025-07-10T23:36:21.656623423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:21.659284 containerd[1496]: time="2025-07-10T23:36:21.658631143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293535" Jul 10 23:36:21.660134 containerd[1496]: time="2025-07-10T23:36:21.659648823Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:21.664546 containerd[1496]: time="2025-07-10T23:36:21.664463983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:21.666456 containerd[1496]: time="2025-07-10T23:36:21.666105623Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id 
\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.5953394s" Jul 10 23:36:21.666456 containerd[1496]: time="2025-07-10T23:36:21.666156303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 23:36:21.667058 containerd[1496]: time="2025-07-10T23:36:21.666796263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 23:36:22.783531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684394106.mount: Deactivated successfully. Jul 10 23:36:23.213587 containerd[1496]: time="2025-07-10T23:36:23.213487703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:23.215446 containerd[1496]: time="2025-07-10T23:36:23.215343823Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199498" Jul 10 23:36:23.217353 containerd[1496]: time="2025-07-10T23:36:23.217258903Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:23.220565 containerd[1496]: time="2025-07-10T23:36:23.220492303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:23.221425 containerd[1496]: time="2025-07-10T23:36:23.221287583Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.5544552s" Jul 10 23:36:23.221425 containerd[1496]: time="2025-07-10T23:36:23.221323743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 23:36:23.221858 containerd[1496]: time="2025-07-10T23:36:23.221817103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 23:36:23.862157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255887217.mount: Deactivated successfully. Jul 10 23:36:24.769646 containerd[1496]: time="2025-07-10T23:36:24.769564463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:24.771699 containerd[1496]: time="2025-07-10T23:36:24.771385383Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Jul 10 23:36:24.773159 containerd[1496]: time="2025-07-10T23:36:24.773111983Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:24.778456 containerd[1496]: time="2025-07-10T23:36:24.778388623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:24.781338 containerd[1496]: time="2025-07-10T23:36:24.780410103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.55854844s" Jul 10 23:36:24.781338 containerd[1496]: time="2025-07-10T23:36:24.780456743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 23:36:24.781338 containerd[1496]: time="2025-07-10T23:36:24.781081703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:36:25.362814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544845554.mount: Deactivated successfully. Jul 10 23:36:25.372814 containerd[1496]: time="2025-07-10T23:36:25.372667703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:25.374496 containerd[1496]: time="2025-07-10T23:36:25.374039463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 10 23:36:25.375946 containerd[1496]: time="2025-07-10T23:36:25.375823743Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:25.378868 containerd[1496]: time="2025-07-10T23:36:25.378798183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:25.380699 containerd[1496]: time="2025-07-10T23:36:25.379932223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.817ms" Jul 10 
23:36:25.380699 containerd[1496]: time="2025-07-10T23:36:25.380009983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:36:25.381496 containerd[1496]: time="2025-07-10T23:36:25.381417143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 23:36:25.971310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141030074.mount: Deactivated successfully. Jul 10 23:36:27.751703 containerd[1496]: time="2025-07-10T23:36:27.751581823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:27.753321 containerd[1496]: time="2025-07-10T23:36:27.753257583Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334637" Jul 10 23:36:27.755199 containerd[1496]: time="2025-07-10T23:36:27.755144023Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:27.761479 containerd[1496]: time="2025-07-10T23:36:27.761413903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:27.766164 containerd[1496]: time="2025-07-10T23:36:27.766082103Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.38460776s" Jul 10 23:36:27.766164 containerd[1496]: time="2025-07-10T23:36:27.766148903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image 
reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 23:36:31.205391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 10 23:36:31.213568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:31.349455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:31.358853 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:36:31.409238 kubelet[2196]: E0710 23:36:31.405850 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:36:31.408952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:36:31.409143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:36:31.410444 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106.9M memory peak. Jul 10 23:36:34.865103 systemd[1]: Started sshd@8-49.13.217.224:22-103.99.206.83:40840.service - OpenSSH per-connection server daemon (103.99.206.83:40840). Jul 10 23:36:35.034710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:35.035480 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106.9M memory peak. Jul 10 23:36:35.051055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:35.094275 systemd[1]: Reload requested from client PID 2213 ('systemctl') (unit session-7.scope)... Jul 10 23:36:35.094295 systemd[1]: Reloading... 
Jul 10 23:36:35.204435 sshd[2204]: Connection closed by 103.99.206.83 port 40840 [preauth] Jul 10 23:36:35.240277 zram_generator::config[2261]: No configuration found. Jul 10 23:36:35.363963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:36:35.459173 systemd[1]: Reloading finished in 364 ms. Jul 10 23:36:35.477638 systemd[1]: sshd@8-49.13.217.224:22-103.99.206.83:40840.service: Deactivated successfully. Jul 10 23:36:35.521029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:35.541707 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:36:35.547126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:35.547594 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:36:35.547945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:35.548034 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.9M memory peak. Jul 10 23:36:35.555630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:35.740593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:35.741009 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:36:35.796279 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:35.796279 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 10 23:36:35.796279 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:35.796279 kubelet[2313]: I0710 23:36:35.795258 2313 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:36:37.024746 kubelet[2313]: I0710 23:36:37.024701 2313 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:36:37.025177 kubelet[2313]: I0710 23:36:37.025162 2313 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:36:37.025532 kubelet[2313]: I0710 23:36:37.025515 2313 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:36:37.050109 kubelet[2313]: E0710 23:36:37.050047 2313 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.217.224:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 23:36:37.054271 kubelet[2313]: I0710 23:36:37.053303 2313 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:36:37.067210 kubelet[2313]: E0710 23:36:37.067157 2313 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:36:37.067440 kubelet[2313]: I0710 23:36:37.067424 2313 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jul 10 23:36:37.070889 kubelet[2313]: I0710 23:36:37.070843 2313 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 23:36:37.071439 kubelet[2313]: I0710 23:36:37.071402 2313 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:36:37.071762 kubelet[2313]: I0710 23:36:37.071579 2313 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-n-56a4dae949","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:36:37.071980 kubelet[2313]: I0710 23:36:37.071964 2313 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:36:37.072067 kubelet[2313]: I0710 23:36:37.072057 2313 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:36:37.072364 kubelet[2313]: I0710 23:36:37.072341 2313 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:37.076922 kubelet[2313]: I0710 23:36:37.076866 2313 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:36:37.077118 kubelet[2313]: I0710 23:36:37.077104 2313 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:36:37.077221 kubelet[2313]: I0710 23:36:37.077209 2313 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:36:37.079820 kubelet[2313]: I0710 23:36:37.079784 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:36:37.084674 kubelet[2313]: E0710 23:36:37.084626 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.217.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-n-56a4dae949&limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:36:37.085527 kubelet[2313]: E0710 23:36:37.085323 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.217.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:36:37.085711 kubelet[2313]: I0710 23:36:37.085683 2313 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:36:37.086658 
kubelet[2313]: I0710 23:36:37.086608 2313 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:36:37.086785 kubelet[2313]: W0710 23:36:37.086746 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 23:36:37.093305 kubelet[2313]: I0710 23:36:37.091306 2313 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:36:37.093305 kubelet[2313]: I0710 23:36:37.091363 2313 server.go:1289] "Started kubelet" Jul 10 23:36:37.093305 kubelet[2313]: I0710 23:36:37.092039 2313 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:36:37.093305 kubelet[2313]: I0710 23:36:37.093213 2313 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:36:37.094583 kubelet[2313]: I0710 23:36:37.094201 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:36:37.094971 kubelet[2313]: I0710 23:36:37.094949 2313 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:36:37.097153 kubelet[2313]: E0710 23:36:37.095213 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.217.224:6443/api/v1/namespaces/default/events\": dial tcp 49.13.217.224:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-1-n-56a4dae949.1851081086688172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-n-56a4dae949,UID:ci-4230-2-1-n-56a4dae949,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-n-56a4dae949,},FirstTimestamp:2025-07-10 23:36:37.09132837 +0000 UTC m=+1.342673027,LastTimestamp:2025-07-10 23:36:37.09132837 +0000 UTC 
m=+1.342673027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-n-56a4dae949,}" Jul 10 23:36:37.100886 kubelet[2313]: I0710 23:36:37.100640 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:36:37.104726 kubelet[2313]: I0710 23:36:37.103056 2313 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:36:37.104726 kubelet[2313]: I0710 23:36:37.103170 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:36:37.104726 kubelet[2313]: E0710 23:36:37.104306 2313 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:37.105702 kubelet[2313]: I0710 23:36:37.105664 2313 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:36:37.105842 kubelet[2313]: I0710 23:36:37.105824 2313 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:36:37.106731 kubelet[2313]: E0710 23:36:37.106697 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.217.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-n-56a4dae949?timeout=10s\": dial tcp 49.13.217.224:6443: connect: connection refused" interval="200ms" Jul 10 23:36:37.108518 kubelet[2313]: E0710 23:36:37.108480 2313 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:36:37.108818 kubelet[2313]: I0710 23:36:37.108777 2313 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:36:37.111253 kubelet[2313]: I0710 23:36:37.111168 2313 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:36:37.111397 kubelet[2313]: I0710 23:36:37.111356 2313 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:36:37.111397 kubelet[2313]: I0710 23:36:37.111376 2313 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:36:37.141955 kubelet[2313]: E0710 23:36:37.141844 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.217.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:36:37.148102 kubelet[2313]: I0710 23:36:37.148030 2313 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:36:37.148361 kubelet[2313]: I0710 23:36:37.148318 2313 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:36:37.148361 kubelet[2313]: I0710 23:36:37.148355 2313 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 23:36:37.148361 kubelet[2313]: I0710 23:36:37.148364 2313 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:36:37.148872 kubelet[2313]: E0710 23:36:37.148725 2313 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:36:37.149810 kubelet[2313]: E0710 23:36:37.149497 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.217.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:36:37.150981 kubelet[2313]: I0710 23:36:37.150944 2313 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:36:37.150981 kubelet[2313]: I0710 23:36:37.150962 2313 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:36:37.150981 kubelet[2313]: I0710 23:36:37.150984 2313 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:37.159194 kubelet[2313]: I0710 23:36:37.159125 2313 policy_none.go:49] "None policy: Start" Jul 10 23:36:37.159194 kubelet[2313]: I0710 23:36:37.159180 2313 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:36:37.159194 kubelet[2313]: I0710 23:36:37.159202 2313 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:36:37.168227 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:36:37.188076 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:36:37.193040 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 23:36:37.205376 kubelet[2313]: E0710 23:36:37.205267 2313 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:36:37.205710 kubelet[2313]: I0710 23:36:37.205672 2313 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:36:37.205793 kubelet[2313]: I0710 23:36:37.205710 2313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:36:37.208142 kubelet[2313]: I0710 23:36:37.207007 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:36:37.209524 kubelet[2313]: E0710 23:36:37.209430 2313 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 23:36:37.209524 kubelet[2313]: E0710 23:36:37.209492 2313 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:37.269982 systemd[1]: Created slice kubepods-burstable-pod4b590dbfa799512150261657d199c8d1.slice - libcontainer container kubepods-burstable-pod4b590dbfa799512150261657d199c8d1.slice. Jul 10 23:36:37.282861 kubelet[2313]: E0710 23:36:37.282734 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.288458 systemd[1]: Created slice kubepods-burstable-podd586aa89cb912eb5501f0f8affb31f47.slice - libcontainer container kubepods-burstable-podd586aa89cb912eb5501f0f8affb31f47.slice. 
Jul 10 23:36:37.299908 kubelet[2313]: E0710 23:36:37.299804 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.304048 systemd[1]: Created slice kubepods-burstable-pod32d2a4a486522b0dd115327342c57ee5.slice - libcontainer container kubepods-burstable-pod32d2a4a486522b0dd115327342c57ee5.slice. Jul 10 23:36:37.306964 kubelet[2313]: E0710 23:36:37.306917 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.307492 kubelet[2313]: E0710 23:36:37.307454 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.217.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-n-56a4dae949?timeout=10s\": dial tcp 49.13.217.224:6443: connect: connection refused" interval="400ms" Jul 10 23:36:37.309263 kubelet[2313]: I0710 23:36:37.309218 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.309748 kubelet[2313]: E0710 23:36:37.309693 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.217.224:6443/api/v1/nodes\": dial tcp 49.13.217.224:6443: connect: connection refused" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408692 kubelet[2313]: I0710 23:36:37.408625 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408692 kubelet[2313]: I0710 23:36:37.408689 2313 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408962 kubelet[2313]: I0710 23:36:37.408720 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408962 kubelet[2313]: I0710 23:36:37.408746 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408962 kubelet[2313]: I0710 23:36:37.408774 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32d2a4a486522b0dd115327342c57ee5-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-n-56a4dae949\" (UID: \"32d2a4a486522b0dd115327342c57ee5\") " pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408962 kubelet[2313]: I0710 23:36:37.408802 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " 
pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.408962 kubelet[2313]: I0710 23:36:37.408824 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.409221 kubelet[2313]: I0710 23:36:37.408859 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.409221 kubelet[2313]: I0710 23:36:37.408917 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.512842 kubelet[2313]: I0710 23:36:37.512334 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.512842 kubelet[2313]: E0710 23:36:37.512796 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.217.224:6443/api/v1/nodes\": dial tcp 49.13.217.224:6443: connect: connection refused" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.587072 containerd[1496]: time="2025-07-10T23:36:37.586848373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-n-56a4dae949,Uid:4b590dbfa799512150261657d199c8d1,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:37.602066 containerd[1496]: time="2025-07-10T23:36:37.601504615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-n-56a4dae949,Uid:d586aa89cb912eb5501f0f8affb31f47,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:37.608962 containerd[1496]: time="2025-07-10T23:36:37.608912674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-n-56a4dae949,Uid:32d2a4a486522b0dd115327342c57ee5,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:37.708733 kubelet[2313]: E0710 23:36:37.708600 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.217.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-n-56a4dae949?timeout=10s\": dial tcp 49.13.217.224:6443: connect: connection refused" interval="800ms" Jul 10 23:36:37.915693 kubelet[2313]: I0710 23:36:37.915474 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:37.916492 kubelet[2313]: E0710 23:36:37.916304 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.217.224:6443/api/v1/nodes\": dial tcp 49.13.217.224:6443: connect: connection refused" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:38.146286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661406079.mount: Deactivated successfully. 
Jul 10 23:36:38.160993 kubelet[2313]: E0710 23:36:38.160920 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.217.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-n-56a4dae949&limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:36:38.221758 containerd[1496]: time="2025-07-10T23:36:38.221633916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:38.224955 containerd[1496]: time="2025-07-10T23:36:38.224886555Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:38.227394 containerd[1496]: time="2025-07-10T23:36:38.227316564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 10 23:36:38.228327 containerd[1496]: time="2025-07-10T23:36:38.228178513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:36:38.231449 containerd[1496]: time="2025-07-10T23:36:38.231360513Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:38.234444 containerd[1496]: time="2025-07-10T23:36:38.234379234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:38.234832 containerd[1496]: time="2025-07-10T23:36:38.234777269Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:36:38.241291 containerd[1496]: time="2025-07-10T23:36:38.240341358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:36:38.243278 containerd[1496]: time="2025-07-10T23:36:38.242156455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 640.495963ms" Jul 10 23:36:38.245994 containerd[1496]: time="2025-07-10T23:36:38.245932007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.725897ms" Jul 10 23:36:38.247129 containerd[1496]: time="2025-07-10T23:36:38.247060353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.066901ms" Jul 10 23:36:38.254341 kubelet[2313]: E0710 23:36:38.254300 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.217.224:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:36:38.304610 kubelet[2313]: E0710 23:36:38.304564 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.217.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:36:38.400699 containerd[1496]: time="2025-07-10T23:36:38.399164940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:38.400957 containerd[1496]: time="2025-07-10T23:36:38.400910718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:38.401585 containerd[1496]: time="2025-07-10T23:36:38.400974757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.402030 containerd[1496]: time="2025-07-10T23:36:38.401846906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.405191 containerd[1496]: time="2025-07-10T23:36:38.403124170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:38.405191 containerd[1496]: time="2025-07-10T23:36:38.403198889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:38.405191 containerd[1496]: time="2025-07-10T23:36:38.403210729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.405191 containerd[1496]: time="2025-07-10T23:36:38.405022506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.406597 containerd[1496]: time="2025-07-10T23:36:38.406437048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:38.406830 containerd[1496]: time="2025-07-10T23:36:38.406593566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:38.406830 containerd[1496]: time="2025-07-10T23:36:38.406616046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.406830 containerd[1496]: time="2025-07-10T23:36:38.406741924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:38.438546 systemd[1]: Started cri-containerd-3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91.scope - libcontainer container 3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91. Jul 10 23:36:38.440166 systemd[1]: Started cri-containerd-6fefc142edc4ede8dbe037089450571fb423e555785fb07c59d8e9d65ebf8314.scope - libcontainer container 6fefc142edc4ede8dbe037089450571fb423e555785fb07c59d8e9d65ebf8314. Jul 10 23:36:38.446777 systemd[1]: Started cri-containerd-c74a923cd530ec94fb951339ebb04ba7861f68e809cffa979389aa234c506ce5.scope - libcontainer container c74a923cd530ec94fb951339ebb04ba7861f68e809cffa979389aa234c506ce5. 
Jul 10 23:36:38.509534 kubelet[2313]: E0710 23:36:38.509366 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.217.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-n-56a4dae949?timeout=10s\": dial tcp 49.13.217.224:6443: connect: connection refused" interval="1.6s" Jul 10 23:36:38.509534 kubelet[2313]: E0710 23:36:38.509384 2313 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.217.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.217.224:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:36:38.520983 containerd[1496]: time="2025-07-10T23:36:38.520883594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-n-56a4dae949,Uid:d586aa89cb912eb5501f0f8affb31f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91\"" Jul 10 23:36:38.521779 containerd[1496]: time="2025-07-10T23:36:38.521738503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-n-56a4dae949,Uid:4b590dbfa799512150261657d199c8d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fefc142edc4ede8dbe037089450571fb423e555785fb07c59d8e9d65ebf8314\"" Jul 10 23:36:38.531289 containerd[1496]: time="2025-07-10T23:36:38.531172663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-n-56a4dae949,Uid:32d2a4a486522b0dd115327342c57ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74a923cd530ec94fb951339ebb04ba7861f68e809cffa979389aa234c506ce5\"" Jul 10 23:36:38.532187 containerd[1496]: time="2025-07-10T23:36:38.532042692Z" level=info msg="CreateContainer within sandbox \"3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:36:38.536174 containerd[1496]: time="2025-07-10T23:36:38.536128000Z" level=info msg="CreateContainer within sandbox \"6fefc142edc4ede8dbe037089450571fb423e555785fb07c59d8e9d65ebf8314\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:36:38.540752 containerd[1496]: time="2025-07-10T23:36:38.540601503Z" level=info msg="CreateContainer within sandbox \"c74a923cd530ec94fb951339ebb04ba7861f68e809cffa979389aa234c506ce5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:36:38.559720 containerd[1496]: time="2025-07-10T23:36:38.559664461Z" level=info msg="CreateContainer within sandbox \"6fefc142edc4ede8dbe037089450571fb423e555785fb07c59d8e9d65ebf8314\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24b3ba60d9896cee261ba2c0253a5b4e74fc6faf1345c77215c25a9d351e31b7\"" Jul 10 23:36:38.561339 containerd[1496]: time="2025-07-10T23:36:38.561132202Z" level=info msg="StartContainer for \"24b3ba60d9896cee261ba2c0253a5b4e74fc6faf1345c77215c25a9d351e31b7\"" Jul 10 23:36:38.569411 containerd[1496]: time="2025-07-10T23:36:38.569267379Z" level=info msg="CreateContainer within sandbox \"3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a\"" Jul 10 23:36:38.570618 containerd[1496]: time="2025-07-10T23:36:38.570487123Z" level=info msg="StartContainer for \"33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a\"" Jul 10 23:36:38.578895 containerd[1496]: time="2025-07-10T23:36:38.578367303Z" level=info msg="CreateContainer within sandbox \"c74a923cd530ec94fb951339ebb04ba7861f68e809cffa979389aa234c506ce5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0311a390888effde8687f672ba58b9c99084925cfd73619291295dfbf4d26d34\"" Jul 10 
23:36:38.580126 containerd[1496]: time="2025-07-10T23:36:38.579904004Z" level=info msg="StartContainer for \"0311a390888effde8687f672ba58b9c99084925cfd73619291295dfbf4d26d34\"" Jul 10 23:36:38.610613 systemd[1]: Started cri-containerd-0311a390888effde8687f672ba58b9c99084925cfd73619291295dfbf4d26d34.scope - libcontainer container 0311a390888effde8687f672ba58b9c99084925cfd73619291295dfbf4d26d34. Jul 10 23:36:38.628478 systemd[1]: Started cri-containerd-24b3ba60d9896cee261ba2c0253a5b4e74fc6faf1345c77215c25a9d351e31b7.scope - libcontainer container 24b3ba60d9896cee261ba2c0253a5b4e74fc6faf1345c77215c25a9d351e31b7. Jul 10 23:36:38.638503 systemd[1]: Started cri-containerd-33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a.scope - libcontainer container 33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a. Jul 10 23:36:38.699337 containerd[1496]: time="2025-07-10T23:36:38.699140329Z" level=info msg="StartContainer for \"0311a390888effde8687f672ba58b9c99084925cfd73619291295dfbf4d26d34\" returns successfully" Jul 10 23:36:38.713155 containerd[1496]: time="2025-07-10T23:36:38.712944073Z" level=info msg="StartContainer for \"24b3ba60d9896cee261ba2c0253a5b4e74fc6faf1345c77215c25a9d351e31b7\" returns successfully" Jul 10 23:36:38.721920 kubelet[2313]: I0710 23:36:38.721489 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:38.721920 kubelet[2313]: E0710 23:36:38.721900 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.217.224:6443/api/v1/nodes\": dial tcp 49.13.217.224:6443: connect: connection refused" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:38.726377 containerd[1496]: time="2025-07-10T23:36:38.725979427Z" level=info msg="StartContainer for \"33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a\" returns successfully" Jul 10 23:36:39.162821 kubelet[2313]: E0710 23:36:39.162320 2313 kubelet.go:3305] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:39.166800 kubelet[2313]: E0710 23:36:39.166550 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:39.173265 kubelet[2313]: E0710 23:36:39.172608 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.173350 kubelet[2313]: E0710 23:36:40.173317 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.174614 kubelet[2313]: E0710 23:36:40.173333 2313 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.325392 kubelet[2313]: I0710 23:36:40.323880 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.838400 kubelet[2313]: E0710 23:36:40.838357 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-1-n-56a4dae949\" not found" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.872507 kubelet[2313]: I0710 23:36:40.872286 2313 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:40.872507 kubelet[2313]: E0710 23:36:40.872349 2313 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-1-n-56a4dae949\": node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:40.911617 kubelet[2313]: E0710 23:36:40.911579 2313 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:40.923218 kubelet[2313]: E0710 23:36:40.922935 2313 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-1-n-56a4dae949.1851081086688172 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-n-56a4dae949,UID:ci-4230-2-1-n-56a4dae949,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-n-56a4dae949,},FirstTimestamp:2025-07-10 23:36:37.09132837 +0000 UTC m=+1.342673027,LastTimestamp:2025-07-10 23:36:37.09132837 +0000 UTC m=+1.342673027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-n-56a4dae949,}" Jul 10 23:36:40.996113 kubelet[2313]: E0710 23:36:40.995989 2313 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-1-n-56a4dae949.18510810876ded8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-n-56a4dae949,UID:ci-4230-2-1-n-56a4dae949,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-n-56a4dae949,},FirstTimestamp:2025-07-10 23:36:37.108460938 +0000 UTC m=+1.359805595,LastTimestamp:2025-07-10 23:36:37.108460938 +0000 UTC m=+1.359805595,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-n-56a4dae949,}" Jul 10 23:36:41.012493 kubelet[2313]: E0710 23:36:41.012442 2313 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:41.113351 kubelet[2313]: E0710 23:36:41.112785 2313 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:41.205617 kubelet[2313]: I0710 23:36:41.205566 2313 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:41.221950 kubelet[2313]: E0710 23:36:41.221895 2313 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:41.222457 kubelet[2313]: I0710 23:36:41.221932 2313 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:41.229199 kubelet[2313]: E0710 23:36:41.228981 2313 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:41.229199 kubelet[2313]: I0710 23:36:41.229015 2313 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:41.233071 kubelet[2313]: E0710 23:36:41.233014 2313 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-1-n-56a4dae949\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:42.050329 kubelet[2313]: I0710 23:36:42.049652 2313 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:42.087041 kubelet[2313]: I0710 23:36:42.087002 2313 apiserver.go:52] "Watching apiserver" Jul 10 23:36:42.106090 
kubelet[2313]: I0710 23:36:42.106033 2313 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:36:43.227835 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Jul 10 23:36:43.227850 systemd[1]: Reloading... Jul 10 23:36:43.340293 zram_generator::config[2642]: No configuration found. Jul 10 23:36:43.476030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:36:43.604037 systemd[1]: Reloading finished in 375 ms. Jul 10 23:36:43.647676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:43.662077 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:36:43.664279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:43.664713 systemd[1]: kubelet.service: Consumed 1.807s CPU time, 129.3M memory peak. Jul 10 23:36:43.674004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:36:43.835523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:36:43.848609 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:36:43.924057 kubelet[2691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:43.924057 kubelet[2691]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 10 23:36:43.924057 kubelet[2691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:36:43.924057 kubelet[2691]: I0710 23:36:43.924023 2691 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:36:43.931167 kubelet[2691]: I0710 23:36:43.931098 2691 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:36:43.931167 kubelet[2691]: I0710 23:36:43.931139 2691 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:36:43.931621 kubelet[2691]: I0710 23:36:43.931492 2691 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:36:43.936886 kubelet[2691]: I0710 23:36:43.936828 2691 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 23:36:43.948051 kubelet[2691]: I0710 23:36:43.947602 2691 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:36:43.951956 kubelet[2691]: E0710 23:36:43.951907 2691 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:36:43.952143 kubelet[2691]: I0710 23:36:43.952131 2691 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 23:36:43.956818 kubelet[2691]: I0710 23:36:43.956752 2691 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:36:43.957280 kubelet[2691]: I0710 23:36:43.957170 2691 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:36:43.957578 kubelet[2691]: I0710 23:36:43.957229 2691 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-n-56a4dae949","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:36:43.957711 kubelet[2691]: I0710 23:36:43.957582 2691 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 
23:36:43.957711 kubelet[2691]: I0710 23:36:43.957600 2691 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:36:43.957711 kubelet[2691]: I0710 23:36:43.957673 2691 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:43.958188 kubelet[2691]: I0710 23:36:43.958155 2691 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:36:43.958374 kubelet[2691]: I0710 23:36:43.958189 2691 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:36:43.958374 kubelet[2691]: I0710 23:36:43.958229 2691 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:36:43.958374 kubelet[2691]: I0710 23:36:43.958308 2691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:36:43.962313 kubelet[2691]: I0710 23:36:43.962090 2691 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:36:43.965257 kubelet[2691]: I0710 23:36:43.963518 2691 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:36:43.975395 kubelet[2691]: I0710 23:36:43.975361 2691 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:36:43.975526 kubelet[2691]: I0710 23:36:43.975418 2691 server.go:1289] "Started kubelet" Jul 10 23:36:43.980244 kubelet[2691]: I0710 23:36:43.977292 2691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:36:43.986286 kubelet[2691]: I0710 23:36:43.986200 2691 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:36:43.988445 kubelet[2691]: I0710 23:36:43.987208 2691 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:36:43.989491 kubelet[2691]: I0710 23:36:43.989346 2691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:36:43.991245 kubelet[2691]: I0710 23:36:43.990101 2691 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:36:43.991679 kubelet[2691]: I0710 23:36:43.991635 2691 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:36:43.994261 kubelet[2691]: I0710 23:36:43.992213 2691 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:36:43.994261 kubelet[2691]: E0710 23:36:43.992550 2691 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-n-56a4dae949\" not found" Jul 10 23:36:43.994479 kubelet[2691]: I0710 23:36:43.994449 2691 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:36:43.994630 kubelet[2691]: I0710 23:36:43.994613 2691 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:36:43.997260 kubelet[2691]: I0710 23:36:43.997204 2691 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:36:44.000259 kubelet[2691]: I0710 23:36:43.998527 2691 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:36:44.000259 kubelet[2691]: I0710 23:36:43.998581 2691 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:36:44.000259 kubelet[2691]: I0710 23:36:43.998608 2691 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 23:36:44.000259 kubelet[2691]: I0710 23:36:43.998616 2691 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:36:44.000259 kubelet[2691]: E0710 23:36:43.998670 2691 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:36:44.007936 kubelet[2691]: I0710 23:36:44.005778 2691 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:36:44.007936 kubelet[2691]: I0710 23:36:44.005913 2691 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:36:44.014591 kubelet[2691]: I0710 23:36:44.014554 2691 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:36:44.023095 kubelet[2691]: E0710 23:36:44.023036 2691 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:36:44.082348 kubelet[2691]: I0710 23:36:44.082319 2691 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.082819 2691 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.082851 2691 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083009 2691 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083020 2691 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083038 2691 policy_none.go:49] "None policy: Start" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083067 2691 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083076 2691 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:36:44.083769 kubelet[2691]: I0710 23:36:44.083164 2691 state_mem.go:75] "Updated machine memory state" Jul 10 23:36:44.089208 kubelet[2691]: E0710 23:36:44.089164 2691 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:36:44.089480 kubelet[2691]: I0710 23:36:44.089433 2691 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:36:44.089541 kubelet[2691]: I0710 23:36:44.089464 2691 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:36:44.091292 kubelet[2691]: I0710 23:36:44.090975 2691 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:36:44.092905 kubelet[2691]: E0710 23:36:44.092875 2691 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:36:44.101886 kubelet[2691]: I0710 23:36:44.099999 2691 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.103742 kubelet[2691]: I0710 23:36:44.100695 2691 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.103742 kubelet[2691]: I0710 23:36:44.100867 2691 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.124366 kubelet[2691]: E0710 23:36:44.124326 2691 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.193837 kubelet[2691]: I0710 23:36:44.193682 2691 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.208164 kubelet[2691]: I0710 23:36:44.208119 2691 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.208332 kubelet[2691]: I0710 23:36:44.208225 2691 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.231853 sudo[2729]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 23:36:44.232684 sudo[2729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 23:36:44.297242 kubelet[2691]: I0710 23:36:44.297180 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 
23:36:44.297398 kubelet[2691]: I0710 23:36:44.297263 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297398 kubelet[2691]: I0710 23:36:44.297290 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297398 kubelet[2691]: I0710 23:36:44.297324 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32d2a4a486522b0dd115327342c57ee5-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-n-56a4dae949\" (UID: \"32d2a4a486522b0dd115327342c57ee5\") " pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297398 kubelet[2691]: I0710 23:36:44.297347 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297398 kubelet[2691]: I0710 23:36:44.297381 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-k8s-certs\") pod 
\"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297511 kubelet[2691]: I0710 23:36:44.297402 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b590dbfa799512150261657d199c8d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-n-56a4dae949\" (UID: \"4b590dbfa799512150261657d199c8d1\") " pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297511 kubelet[2691]: I0710 23:36:44.297436 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.297511 kubelet[2691]: I0710 23:36:44.297468 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d586aa89cb912eb5501f0f8affb31f47-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" (UID: \"d586aa89cb912eb5501f0f8affb31f47\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:44.757153 sudo[2729]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:44.961089 kubelet[2691]: I0710 23:36:44.960325 2691 apiserver.go:52] "Watching apiserver" Jul 10 23:36:44.995858 kubelet[2691]: I0710 23:36:44.995757 2691 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:36:45.049746 kubelet[2691]: I0710 23:36:45.049495 2691 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 
23:36:45.049746 kubelet[2691]: I0710 23:36:45.049491 2691 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:45.061163 kubelet[2691]: E0710 23:36:45.061122 2691 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-1-n-56a4dae949\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:45.062993 kubelet[2691]: E0710 23:36:45.062654 2691 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-1-n-56a4dae949\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" Jul 10 23:36:45.081431 kubelet[2691]: I0710 23:36:45.081194 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-1-n-56a4dae949" podStartSLOduration=1.081170087 podStartE2EDuration="1.081170087s" podCreationTimestamp="2025-07-10 23:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:45.07832043 +0000 UTC m=+1.222814209" watchObservedRunningTime="2025-07-10 23:36:45.081170087 +0000 UTC m=+1.225663826" Jul 10 23:36:45.113157 kubelet[2691]: I0710 23:36:45.112605 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-1-n-56a4dae949" podStartSLOduration=3.112581953 podStartE2EDuration="3.112581953s" podCreationTimestamp="2025-07-10 23:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:45.095152414 +0000 UTC m=+1.239646153" watchObservedRunningTime="2025-07-10 23:36:45.112581953 +0000 UTC m=+1.257075692" Jul 10 23:36:45.127482 kubelet[2691]: I0710 23:36:45.127420 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4230-2-1-n-56a4dae949" podStartSLOduration=1.1274019530000001 podStartE2EDuration="1.127401953s" podCreationTimestamp="2025-07-10 23:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:45.113346707 +0000 UTC m=+1.257840446" watchObservedRunningTime="2025-07-10 23:36:45.127401953 +0000 UTC m=+1.271895692" Jul 10 23:36:46.977495 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:47.141275 sshd[1768]: Connection closed by 139.178.89.65 port 49038 Jul 10 23:36:47.143195 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:47.153821 systemd[1]: sshd@7-49.13.217.224:22-139.178.89.65:49038.service: Deactivated successfully. Jul 10 23:36:47.157419 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:36:47.157622 systemd[1]: session-7.scope: Consumed 10.014s CPU time, 266.2M memory peak. Jul 10 23:36:47.160819 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:36:47.163080 systemd-logind[1470]: Removed session 7. Jul 10 23:36:48.244181 kubelet[2691]: I0710 23:36:48.243543 2691 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:36:48.244566 containerd[1496]: time="2025-07-10T23:36:48.243946097Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 23:36:48.247899 kubelet[2691]: I0710 23:36:48.246490 2691 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:36:48.900443 systemd[1]: Created slice kubepods-besteffort-pod13537dc1_1d54_47cd_8fad_acfbf4b99bf0.slice - libcontainer container kubepods-besteffort-pod13537dc1_1d54_47cd_8fad_acfbf4b99bf0.slice. 
Jul 10 23:36:48.927225 systemd[1]: Created slice kubepods-burstable-podcc603403_5115_4834_a76c_39beadd02155.slice - libcontainer container kubepods-burstable-podcc603403_5115_4834_a76c_39beadd02155.slice. Jul 10 23:36:48.931625 kubelet[2691]: I0710 23:36:48.931564 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc603403-5115-4834-a76c-39beadd02155-cilium-config-path\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.931625 kubelet[2691]: I0710 23:36:48.931620 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-lib-modules\") pod \"kube-proxy-wzl48\" (UID: \"13537dc1-1d54-47cd-8fad-acfbf4b99bf0\") " pod="kube-system/kube-proxy-wzl48" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931640 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-run\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931668 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cni-path\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931686 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-hubble-tls\") pod \"cilium-bgdr4\" (UID: 
\"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931707 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrvz\" (UniqueName: \"kubernetes.io/projected/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-kube-api-access-xhrvz\") pod \"kube-proxy-wzl48\" (UID: \"13537dc1-1d54-47cd-8fad-acfbf4b99bf0\") " pod="kube-system/kube-proxy-wzl48" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931735 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-bpf-maps\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.931929 kubelet[2691]: I0710 23:36:48.931834 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-hostproc\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932199 kubelet[2691]: I0710 23:36:48.931851 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc603403-5115-4834-a76c-39beadd02155-clustermesh-secrets\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932199 kubelet[2691]: I0710 23:36:48.931866 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-net\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932199 kubelet[2691]: 
I0710 23:36:48.931883 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-kube-proxy\") pod \"kube-proxy-wzl48\" (UID: \"13537dc1-1d54-47cd-8fad-acfbf4b99bf0\") " pod="kube-system/kube-proxy-wzl48" Jul 10 23:36:48.932199 kubelet[2691]: I0710 23:36:48.931899 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-etc-cni-netd\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932199 kubelet[2691]: I0710 23:36:48.931916 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-kernel\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932332 kubelet[2691]: I0710 23:36:48.931930 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2526g\" (UniqueName: \"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932332 kubelet[2691]: I0710 23:36:48.931955 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-xtables-lock\") pod \"kube-proxy-wzl48\" (UID: \"13537dc1-1d54-47cd-8fad-acfbf4b99bf0\") " pod="kube-system/kube-proxy-wzl48" Jul 10 23:36:48.932332 kubelet[2691]: I0710 23:36:48.931972 2691 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-cgroup\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932332 kubelet[2691]: I0710 23:36:48.931986 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-lib-modules\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:48.932332 kubelet[2691]: I0710 23:36:48.932003 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-xtables-lock\") pod \"cilium-bgdr4\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " pod="kube-system/cilium-bgdr4" Jul 10 23:36:49.063892 kubelet[2691]: E0710 23:36:49.063047 2691 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 23:36:49.063892 kubelet[2691]: E0710 23:36:49.063090 2691 projected.go:194] Error preparing data for projected volume kube-api-access-2526g for pod kube-system/cilium-bgdr4: configmap "kube-root-ca.crt" not found Jul 10 23:36:49.063892 kubelet[2691]: E0710 23:36:49.063214 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g podName:cc603403-5115-4834-a76c-39beadd02155 nodeName:}" failed. No retries permitted until 2025-07-10 23:36:49.563178343 +0000 UTC m=+5.707672082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2526g" (UniqueName: "kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g") pod "cilium-bgdr4" (UID: "cc603403-5115-4834-a76c-39beadd02155") : configmap "kube-root-ca.crt" not found Jul 10 23:36:49.073833 kubelet[2691]: E0710 23:36:49.073787 2691 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 23:36:49.073833 kubelet[2691]: E0710 23:36:49.073828 2691 projected.go:194] Error preparing data for projected volume kube-api-access-xhrvz for pod kube-system/kube-proxy-wzl48: configmap "kube-root-ca.crt" not found Jul 10 23:36:49.074000 kubelet[2691]: E0710 23:36:49.073926 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-kube-api-access-xhrvz podName:13537dc1-1d54-47cd-8fad-acfbf4b99bf0 nodeName:}" failed. No retries permitted until 2025-07-10 23:36:49.573902076 +0000 UTC m=+5.718395815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xhrvz" (UniqueName: "kubernetes.io/projected/13537dc1-1d54-47cd-8fad-acfbf4b99bf0-kube-api-access-xhrvz") pod "kube-proxy-wzl48" (UID: "13537dc1-1d54-47cd-8fad-acfbf4b99bf0") : configmap "kube-root-ca.crt" not found Jul 10 23:36:49.436414 systemd[1]: Created slice kubepods-besteffort-podfd35a0a3_79f2_4fce_ac2a_9a7b237f8f7f.slice - libcontainer container kubepods-besteffort-podfd35a0a3_79f2_4fce_ac2a_9a7b237f8f7f.slice. 
Jul 10 23:36:49.437967 kubelet[2691]: I0710 23:36:49.437007 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bhxfr\" (UID: \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\") " pod="kube-system/cilium-operator-6c4d7847fc-bhxfr" Jul 10 23:36:49.437967 kubelet[2691]: I0710 23:36:49.437071 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ts8\" (UniqueName: \"kubernetes.io/projected/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-kube-api-access-x9ts8\") pod \"cilium-operator-6c4d7847fc-bhxfr\" (UID: \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\") " pod="kube-system/cilium-operator-6c4d7847fc-bhxfr" Jul 10 23:36:49.743324 containerd[1496]: time="2025-07-10T23:36:49.743255174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bhxfr,Uid:fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:49.778855 containerd[1496]: time="2025-07-10T23:36:49.777905958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:49.778855 containerd[1496]: time="2025-07-10T23:36:49.777976477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:49.778855 containerd[1496]: time="2025-07-10T23:36:49.777992117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.780293 containerd[1496]: time="2025-07-10T23:36:49.779220709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.802802 systemd[1]: Started cri-containerd-57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439.scope - libcontainer container 57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439. Jul 10 23:36:49.809892 containerd[1496]: time="2025-07-10T23:36:49.809714279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzl48,Uid:13537dc1-1d54-47cd-8fad-acfbf4b99bf0,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:49.841127 containerd[1496]: time="2025-07-10T23:36:49.841079203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bgdr4,Uid:cc603403-5115-4834-a76c-39beadd02155,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:49.848546 containerd[1496]: time="2025-07-10T23:36:49.846618168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:49.849673 containerd[1496]: time="2025-07-10T23:36:49.849366271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:49.849673 containerd[1496]: time="2025-07-10T23:36:49.849399271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.849673 containerd[1496]: time="2025-07-10T23:36:49.849511990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.861352 containerd[1496]: time="2025-07-10T23:36:49.861091998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bhxfr,Uid:fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\"" Jul 10 23:36:49.864721 containerd[1496]: time="2025-07-10T23:36:49.864670656Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 23:36:49.886768 systemd[1]: Started cri-containerd-62600b0c233725c31d27467a85bb14c6cb02fcbbf3f7a74f7febe866838927a4.scope - libcontainer container 62600b0c233725c31d27467a85bb14c6cb02fcbbf3f7a74f7febe866838927a4. Jul 10 23:36:49.894279 containerd[1496]: time="2025-07-10T23:36:49.894023792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:49.894279 containerd[1496]: time="2025-07-10T23:36:49.894096752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:49.894279 containerd[1496]: time="2025-07-10T23:36:49.894113032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.894279 containerd[1496]: time="2025-07-10T23:36:49.894215111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:49.927046 systemd[1]: Started cri-containerd-3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08.scope - libcontainer container 3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08. 
Jul 10 23:36:49.944902 containerd[1496]: time="2025-07-10T23:36:49.944848795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzl48,Uid:13537dc1-1d54-47cd-8fad-acfbf4b99bf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"62600b0c233725c31d27467a85bb14c6cb02fcbbf3f7a74f7febe866838927a4\"" Jul 10 23:36:49.953544 containerd[1496]: time="2025-07-10T23:36:49.953399661Z" level=info msg="CreateContainer within sandbox \"62600b0c233725c31d27467a85bb14c6cb02fcbbf3f7a74f7febe866838927a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:36:49.971217 containerd[1496]: time="2025-07-10T23:36:49.971148030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bgdr4,Uid:cc603403-5115-4834-a76c-39beadd02155,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\"" Jul 10 23:36:49.992183 containerd[1496]: time="2025-07-10T23:36:49.992078940Z" level=info msg="CreateContainer within sandbox \"62600b0c233725c31d27467a85bb14c6cb02fcbbf3f7a74f7febe866838927a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3923142783ec65d5abcae56aa02b153ee193345f8506f19cd496b3f0fcdf3123\"" Jul 10 23:36:49.993487 containerd[1496]: time="2025-07-10T23:36:49.993285452Z" level=info msg="StartContainer for \"3923142783ec65d5abcae56aa02b153ee193345f8506f19cd496b3f0fcdf3123\"" Jul 10 23:36:50.030601 systemd[1]: Started cri-containerd-3923142783ec65d5abcae56aa02b153ee193345f8506f19cd496b3f0fcdf3123.scope - libcontainer container 3923142783ec65d5abcae56aa02b153ee193345f8506f19cd496b3f0fcdf3123. 
Jul 10 23:36:50.089368 containerd[1496]: time="2025-07-10T23:36:50.089260927Z" level=info msg="StartContainer for \"3923142783ec65d5abcae56aa02b153ee193345f8506f19cd496b3f0fcdf3123\" returns successfully" Jul 10 23:36:51.121621 kubelet[2691]: I0710 23:36:51.120469 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzl48" podStartSLOduration=3.120436011 podStartE2EDuration="3.120436011s" podCreationTimestamp="2025-07-10 23:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:51.119013618 +0000 UTC m=+7.263507397" watchObservedRunningTime="2025-07-10 23:36:51.120436011 +0000 UTC m=+7.264929710" Jul 10 23:36:51.561885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616006816.mount: Deactivated successfully. Jul 10 23:36:52.937958 containerd[1496]: time="2025-07-10T23:36:52.936360680Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:52.939685 containerd[1496]: time="2025-07-10T23:36:52.939614263Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 23:36:52.941503 containerd[1496]: time="2025-07-10T23:36:52.941424214Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:52.943786 containerd[1496]: time="2025-07-10T23:36:52.943577763Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.078852508s" Jul 10 23:36:52.944027 containerd[1496]: time="2025-07-10T23:36:52.943984361Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 23:36:52.948011 containerd[1496]: time="2025-07-10T23:36:52.947948100Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 23:36:52.953533 containerd[1496]: time="2025-07-10T23:36:52.953479192Z" level=info msg="CreateContainer within sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 23:36:52.975867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360370670.mount: Deactivated successfully. Jul 10 23:36:52.979059 containerd[1496]: time="2025-07-10T23:36:52.979008260Z" level=info msg="CreateContainer within sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\"" Jul 10 23:36:52.981864 containerd[1496]: time="2025-07-10T23:36:52.980060215Z" level=info msg="StartContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\"" Jul 10 23:36:53.027605 systemd[1]: Started cri-containerd-ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b.scope - libcontainer container ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b. 
Jul 10 23:36:53.062947 containerd[1496]: time="2025-07-10T23:36:53.062892448Z" level=info msg="StartContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" returns successfully" Jul 10 23:36:54.197681 kubelet[2691]: I0710 23:36:54.196179 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bhxfr" podStartSLOduration=2.114585306 podStartE2EDuration="5.196155358s" podCreationTimestamp="2025-07-10 23:36:49 +0000 UTC" firstStartedPulling="2025-07-10 23:36:49.863854821 +0000 UTC m=+6.008348560" lastFinishedPulling="2025-07-10 23:36:52.945424873 +0000 UTC m=+9.089918612" observedRunningTime="2025-07-10 23:36:53.109381304 +0000 UTC m=+9.253875163" watchObservedRunningTime="2025-07-10 23:36:54.196155358 +0000 UTC m=+10.340649097" Jul 10 23:36:59.799105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211802477.mount: Deactivated successfully. Jul 10 23:37:01.408180 containerd[1496]: time="2025-07-10T23:37:01.408108473Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:37:01.411033 containerd[1496]: time="2025-07-10T23:37:01.410919905Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 23:37:01.425953 containerd[1496]: time="2025-07-10T23:37:01.425852382Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:37:01.428977 containerd[1496]: time="2025-07-10T23:37:01.428776133Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.480769993s" Jul 10 23:37:01.428977 containerd[1496]: time="2025-07-10T23:37:01.428841933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 23:37:01.437582 containerd[1496]: time="2025-07-10T23:37:01.437282749Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:37:01.456085 containerd[1496]: time="2025-07-10T23:37:01.455823255Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\"" Jul 10 23:37:01.457600 containerd[1496]: time="2025-07-10T23:37:01.457543250Z" level=info msg="StartContainer for \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\"" Jul 10 23:37:01.497618 systemd[1]: Started cri-containerd-e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df.scope - libcontainer container e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df. Jul 10 23:37:01.533417 containerd[1496]: time="2025-07-10T23:37:01.533303192Z" level=info msg="StartContainer for \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\" returns successfully" Jul 10 23:37:01.553597 systemd[1]: cri-containerd-e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df.scope: Deactivated successfully. 
Jul 10 23:37:01.730644 containerd[1496]: time="2025-07-10T23:37:01.730571264Z" level=info msg="shim disconnected" id=e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df namespace=k8s.io Jul 10 23:37:01.730644 containerd[1496]: time="2025-07-10T23:37:01.730633904Z" level=warning msg="cleaning up after shim disconnected" id=e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df namespace=k8s.io Jul 10 23:37:01.730644 containerd[1496]: time="2025-07-10T23:37:01.730644224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:37:02.125165 containerd[1496]: time="2025-07-10T23:37:02.124987750Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:37:02.151037 containerd[1496]: time="2025-07-10T23:37:02.150977200Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\"" Jul 10 23:37:02.153946 containerd[1496]: time="2025-07-10T23:37:02.153900032Z" level=info msg="StartContainer for \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\"" Jul 10 23:37:02.187604 systemd[1]: Started cri-containerd-a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496.scope - libcontainer container a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496. Jul 10 23:37:02.236953 containerd[1496]: time="2025-07-10T23:37:02.236779369Z" level=info msg="StartContainer for \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\" returns successfully" Jul 10 23:37:02.253398 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:37:02.254143 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 23:37:02.255112 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:37:02.263832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:37:02.264970 systemd[1]: cri-containerd-a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496.scope: Deactivated successfully. Jul 10 23:37:02.288815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:37:02.311496 containerd[1496]: time="2025-07-10T23:37:02.311390447Z" level=info msg="shim disconnected" id=a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496 namespace=k8s.io Jul 10 23:37:02.311496 containerd[1496]: time="2025-07-10T23:37:02.311482127Z" level=warning msg="cleaning up after shim disconnected" id=a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496 namespace=k8s.io Jul 10 23:37:02.311496 containerd[1496]: time="2025-07-10T23:37:02.311498447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:37:02.449626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df-rootfs.mount: Deactivated successfully. Jul 10 23:37:03.130458 containerd[1496]: time="2025-07-10T23:37:03.130394618Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:37:03.167662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457935132.mount: Deactivated successfully. 
Jul 10 23:37:03.175455 containerd[1496]: time="2025-07-10T23:37:03.175151584Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\"" Jul 10 23:37:03.177307 containerd[1496]: time="2025-07-10T23:37:03.176680381Z" level=info msg="StartContainer for \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\"" Jul 10 23:37:03.223279 systemd[1]: Started cri-containerd-e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba.scope - libcontainer container e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba. Jul 10 23:37:03.265740 containerd[1496]: time="2025-07-10T23:37:03.265125437Z" level=info msg="StartContainer for \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\" returns successfully" Jul 10 23:37:03.271906 systemd[1]: cri-containerd-e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba.scope: Deactivated successfully. Jul 10 23:37:03.304065 containerd[1496]: time="2025-07-10T23:37:03.303815339Z" level=info msg="shim disconnected" id=e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba namespace=k8s.io Jul 10 23:37:03.304065 containerd[1496]: time="2025-07-10T23:37:03.303884099Z" level=warning msg="cleaning up after shim disconnected" id=e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba namespace=k8s.io Jul 10 23:37:03.304065 containerd[1496]: time="2025-07-10T23:37:03.303893059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:37:03.454760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba-rootfs.mount: Deactivated successfully. 
Jul 10 23:37:04.133010 containerd[1496]: time="2025-07-10T23:37:04.132854821Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:37:04.181671 containerd[1496]: time="2025-07-10T23:37:04.181508826Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\"" Jul 10 23:37:04.184057 containerd[1496]: time="2025-07-10T23:37:04.183996380Z" level=info msg="StartContainer for \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\"" Jul 10 23:37:04.221339 systemd[1]: Started cri-containerd-6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5.scope - libcontainer container 6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5. Jul 10 23:37:04.256281 systemd[1]: cri-containerd-6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5.scope: Deactivated successfully. 
Jul 10 23:37:04.264766 containerd[1496]: time="2025-07-10T23:37:04.264710668Z" level=info msg="StartContainer for \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\" returns successfully" Jul 10 23:37:04.292472 containerd[1496]: time="2025-07-10T23:37:04.292376843Z" level=info msg="shim disconnected" id=6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5 namespace=k8s.io Jul 10 23:37:04.292472 containerd[1496]: time="2025-07-10T23:37:04.292452762Z" level=warning msg="cleaning up after shim disconnected" id=6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5 namespace=k8s.io Jul 10 23:37:04.292472 containerd[1496]: time="2025-07-10T23:37:04.292464602Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:37:04.453423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5-rootfs.mount: Deactivated successfully. Jul 10 23:37:05.140410 containerd[1496]: time="2025-07-10T23:37:05.140347091Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:37:05.170053 containerd[1496]: time="2025-07-10T23:37:05.169539106Z" level=info msg="CreateContainer within sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\"" Jul 10 23:37:05.172268 containerd[1496]: time="2025-07-10T23:37:05.171181863Z" level=info msg="StartContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\"" Jul 10 23:37:05.209045 systemd[1]: run-containerd-runc-k8s.io-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281-runc.2cVBr2.mount: Deactivated successfully. 
Jul 10 23:37:05.221510 systemd[1]: Started cri-containerd-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281.scope - libcontainer container e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281. Jul 10 23:37:05.285854 containerd[1496]: time="2025-07-10T23:37:05.285674928Z" level=info msg="StartContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" returns successfully" Jul 10 23:37:05.427195 kubelet[2691]: I0710 23:37:05.425942 2691 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 23:37:05.485713 systemd[1]: Created slice kubepods-burstable-pod05b9ebbc_1a42_4664_bce4_998d0d76e55e.slice - libcontainer container kubepods-burstable-pod05b9ebbc_1a42_4664_bce4_998d0d76e55e.slice. Jul 10 23:37:05.497191 systemd[1]: Created slice kubepods-burstable-podd04526e2_40a9_4995_9fd3_3eb7b19072f5.slice - libcontainer container kubepods-burstable-podd04526e2_40a9_4995_9fd3_3eb7b19072f5.slice. Jul 10 23:37:05.569209 kubelet[2691]: I0710 23:37:05.569121 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05b9ebbc-1a42-4664-bce4-998d0d76e55e-config-volume\") pod \"coredns-674b8bbfcf-rlwsd\" (UID: \"05b9ebbc-1a42-4664-bce4-998d0d76e55e\") " pod="kube-system/coredns-674b8bbfcf-rlwsd" Jul 10 23:37:05.569209 kubelet[2691]: I0710 23:37:05.569188 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d04526e2-40a9-4995-9fd3-3eb7b19072f5-config-volume\") pod \"coredns-674b8bbfcf-f9kbr\" (UID: \"d04526e2-40a9-4995-9fd3-3eb7b19072f5\") " pod="kube-system/coredns-674b8bbfcf-f9kbr" Jul 10 23:37:05.569209 kubelet[2691]: I0710 23:37:05.569208 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz4xl\" (UniqueName: 
\"kubernetes.io/projected/05b9ebbc-1a42-4664-bce4-998d0d76e55e-kube-api-access-hz4xl\") pod \"coredns-674b8bbfcf-rlwsd\" (UID: \"05b9ebbc-1a42-4664-bce4-998d0d76e55e\") " pod="kube-system/coredns-674b8bbfcf-rlwsd" Jul 10 23:37:05.569209 kubelet[2691]: I0710 23:37:05.569228 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhk5\" (UniqueName: \"kubernetes.io/projected/d04526e2-40a9-4995-9fd3-3eb7b19072f5-kube-api-access-cmhk5\") pod \"coredns-674b8bbfcf-f9kbr\" (UID: \"d04526e2-40a9-4995-9fd3-3eb7b19072f5\") " pod="kube-system/coredns-674b8bbfcf-f9kbr" Jul 10 23:37:05.795941 containerd[1496]: time="2025-07-10T23:37:05.795604353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rlwsd,Uid:05b9ebbc-1a42-4664-bce4-998d0d76e55e,Namespace:kube-system,Attempt:0,}" Jul 10 23:37:05.803914 containerd[1496]: time="2025-07-10T23:37:05.803498616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f9kbr,Uid:d04526e2-40a9-4995-9fd3-3eb7b19072f5,Namespace:kube-system,Attempt:0,}" Jul 10 23:37:06.170668 kubelet[2691]: I0710 23:37:06.170385 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bgdr4" podStartSLOduration=6.7134149910000005 podStartE2EDuration="18.170362183s" podCreationTimestamp="2025-07-10 23:36:48 +0000 UTC" firstStartedPulling="2025-07-10 23:36:49.973088578 +0000 UTC m=+6.117582317" lastFinishedPulling="2025-07-10 23:37:01.43003577 +0000 UTC m=+17.574529509" observedRunningTime="2025-07-10 23:37:06.168625787 +0000 UTC m=+22.313119566" watchObservedRunningTime="2025-07-10 23:37:06.170362183 +0000 UTC m=+22.314855962" Jul 10 23:37:07.622286 systemd-networkd[1374]: cilium_host: Link UP Jul 10 23:37:07.622758 systemd-networkd[1374]: cilium_net: Link UP Jul 10 23:37:07.623410 systemd-networkd[1374]: cilium_net: Gained carrier Jul 10 23:37:07.624078 systemd-networkd[1374]: cilium_host: Gained carrier Jul 
10 23:37:07.766480 systemd-networkd[1374]: cilium_vxlan: Link UP Jul 10 23:37:07.766496 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jul 10 23:37:08.089326 kernel: NET: Registered PF_ALG protocol family Jul 10 23:37:08.533542 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jul 10 23:37:08.597498 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jul 10 23:37:08.931614 systemd-networkd[1374]: lxc_health: Link UP Jul 10 23:37:08.937407 systemd-networkd[1374]: lxc_health: Gained carrier Jul 10 23:37:09.109651 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jul 10 23:37:09.400268 systemd-networkd[1374]: lxc3a0621d448fe: Link UP Jul 10 23:37:09.406388 kernel: eth0: renamed from tmpbc4fd Jul 10 23:37:09.427345 kernel: eth0: renamed from tmp67c2d Jul 10 23:37:09.435615 systemd-networkd[1374]: lxc3a0621d448fe: Gained carrier Jul 10 23:37:09.435897 systemd-networkd[1374]: lxc29f959424ce8: Link UP Jul 10 23:37:09.436313 systemd-networkd[1374]: lxc29f959424ce8: Gained carrier Jul 10 23:37:10.968373 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jul 10 23:37:11.157397 systemd-networkd[1374]: lxc29f959424ce8: Gained IPv6LL Jul 10 23:37:11.223347 systemd-networkd[1374]: lxc3a0621d448fe: Gained IPv6LL Jul 10 23:37:13.956489 containerd[1496]: time="2025-07-10T23:37:13.956224138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:37:13.956489 containerd[1496]: time="2025-07-10T23:37:13.956432458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:37:13.956942 containerd[1496]: time="2025-07-10T23:37:13.956491778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:37:13.956942 containerd[1496]: time="2025-07-10T23:37:13.956744378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:37:13.992036 containerd[1496]: time="2025-07-10T23:37:13.990772172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:37:13.992036 containerd[1496]: time="2025-07-10T23:37:13.990850692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:37:13.992036 containerd[1496]: time="2025-07-10T23:37:13.990869932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:37:13.992036 containerd[1496]: time="2025-07-10T23:37:13.990954892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:37:13.997671 systemd[1]: Started cri-containerd-bc4fd3fbadb59a3a53c27523e2fda3b990d152c7c8834d5b16d7ab883c82f0b3.scope - libcontainer container bc4fd3fbadb59a3a53c27523e2fda3b990d152c7c8834d5b16d7ab883c82f0b3. Jul 10 23:37:14.040114 systemd[1]: run-containerd-runc-k8s.io-67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f-runc.ubN1Hl.mount: Deactivated successfully. Jul 10 23:37:14.051512 systemd[1]: Started cri-containerd-67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f.scope - libcontainer container 67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f. 
Jul 10 23:37:14.112485 containerd[1496]: time="2025-07-10T23:37:14.112423380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f9kbr,Uid:d04526e2-40a9-4995-9fd3-3eb7b19072f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc4fd3fbadb59a3a53c27523e2fda3b990d152c7c8834d5b16d7ab883c82f0b3\"" Jul 10 23:37:14.125507 containerd[1496]: time="2025-07-10T23:37:14.125440684Z" level=info msg="CreateContainer within sandbox \"bc4fd3fbadb59a3a53c27523e2fda3b990d152c7c8834d5b16d7ab883c82f0b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:37:14.146487 containerd[1496]: time="2025-07-10T23:37:14.146421578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rlwsd,Uid:05b9ebbc-1a42-4664-bce4-998d0d76e55e,Namespace:kube-system,Attempt:0,} returns sandbox id \"67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f\"" Jul 10 23:37:14.152400 containerd[1496]: time="2025-07-10T23:37:14.152338251Z" level=info msg="CreateContainer within sandbox \"bc4fd3fbadb59a3a53c27523e2fda3b990d152c7c8834d5b16d7ab883c82f0b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba610aab655b1cb2eb2cb33ea3050d19fc8d294d7ea0b3c6be02208a0178ff39\"" Jul 10 23:37:14.153166 containerd[1496]: time="2025-07-10T23:37:14.153071490Z" level=info msg="StartContainer for \"ba610aab655b1cb2eb2cb33ea3050d19fc8d294d7ea0b3c6be02208a0178ff39\"" Jul 10 23:37:14.155688 containerd[1496]: time="2025-07-10T23:37:14.154977367Z" level=info msg="CreateContainer within sandbox \"67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:37:14.203812 containerd[1496]: time="2025-07-10T23:37:14.203667907Z" level=info msg="CreateContainer within sandbox \"67c2df8abec8524404cb3c0989ab929de230fa6dc2aa9dcf1a22e53e1f7e595f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"294005de15ef00cf11cddfcac0b43b8f545c57a90df99becbf6c5f06d7f35c36\"" Jul 10 23:37:14.205064 containerd[1496]: time="2025-07-10T23:37:14.205006425Z" level=info msg="StartContainer for \"294005de15ef00cf11cddfcac0b43b8f545c57a90df99becbf6c5f06d7f35c36\"" Jul 10 23:37:14.226638 systemd[1]: Started cri-containerd-ba610aab655b1cb2eb2cb33ea3050d19fc8d294d7ea0b3c6be02208a0178ff39.scope - libcontainer container ba610aab655b1cb2eb2cb33ea3050d19fc8d294d7ea0b3c6be02208a0178ff39. Jul 10 23:37:14.255648 systemd[1]: Started cri-containerd-294005de15ef00cf11cddfcac0b43b8f545c57a90df99becbf6c5f06d7f35c36.scope - libcontainer container 294005de15ef00cf11cddfcac0b43b8f545c57a90df99becbf6c5f06d7f35c36. Jul 10 23:37:14.298589 containerd[1496]: time="2025-07-10T23:37:14.298329429Z" level=info msg="StartContainer for \"ba610aab655b1cb2eb2cb33ea3050d19fc8d294d7ea0b3c6be02208a0178ff39\" returns successfully" Jul 10 23:37:14.308968 containerd[1496]: time="2025-07-10T23:37:14.308714016Z" level=info msg="StartContainer for \"294005de15ef00cf11cddfcac0b43b8f545c57a90df99becbf6c5f06d7f35c36\" returns successfully" Jul 10 23:37:15.262432 kubelet[2691]: I0710 23:37:15.262255 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f9kbr" podStartSLOduration=26.26221441 podStartE2EDuration="26.26221441s" podCreationTimestamp="2025-07-10 23:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:37:15.221669857 +0000 UTC m=+31.366163716" watchObservedRunningTime="2025-07-10 23:37:15.26221441 +0000 UTC m=+31.406708149" Jul 10 23:37:15.264189 kubelet[2691]: I0710 23:37:15.263802 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rlwsd" podStartSLOduration=26.263782728 podStartE2EDuration="26.263782728s" podCreationTimestamp="2025-07-10 23:36:49 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:37:15.263325968 +0000 UTC m=+31.407819707" watchObservedRunningTime="2025-07-10 23:37:15.263782728 +0000 UTC m=+31.408276467" Jul 10 23:37:41.128709 systemd[1]: Started sshd@9-49.13.217.224:22-103.99.206.83:35162.service - OpenSSH per-connection server daemon (103.99.206.83:35162). Jul 10 23:37:41.498530 sshd[4080]: Connection closed by 103.99.206.83 port 35162 [preauth] Jul 10 23:37:41.499872 systemd[1]: sshd@9-49.13.217.224:22-103.99.206.83:35162.service: Deactivated successfully. Jul 10 23:38:28.048741 systemd[1]: Started sshd@10-49.13.217.224:22-139.178.89.65:46260.service - OpenSSH per-connection server daemon (139.178.89.65:46260). Jul 10 23:38:29.046858 sshd[4093]: Accepted publickey for core from 139.178.89.65 port 46260 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:29.049896 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:29.059282 systemd-logind[1470]: New session 8 of user core. Jul 10 23:38:29.066731 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 23:38:29.830275 sshd[4095]: Connection closed by 139.178.89.65 port 46260 Jul 10 23:38:29.831012 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:29.839300 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:38:29.839776 systemd[1]: sshd@10-49.13.217.224:22-139.178.89.65:46260.service: Deactivated successfully. Jul 10 23:38:29.842472 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 23:38:29.847153 systemd-logind[1470]: Removed session 8. Jul 10 23:38:35.008823 systemd[1]: Started sshd@11-49.13.217.224:22-139.178.89.65:52736.service - OpenSSH per-connection server daemon (139.178.89.65:52736). 
Jul 10 23:38:35.997108 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 52736 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:35.999463 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:36.006512 systemd-logind[1470]: New session 9 of user core. Jul 10 23:38:36.011503 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:38:36.752202 sshd[4110]: Connection closed by 139.178.89.65 port 52736 Jul 10 23:38:36.752984 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:36.757610 systemd[1]: sshd@11-49.13.217.224:22-139.178.89.65:52736.service: Deactivated successfully. Jul 10 23:38:36.760110 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:38:36.762865 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:38:36.763867 systemd-logind[1470]: Removed session 9. Jul 10 23:38:41.940425 systemd[1]: Started sshd@12-49.13.217.224:22-139.178.89.65:39126.service - OpenSSH per-connection server daemon (139.178.89.65:39126). Jul 10 23:38:42.934975 sshd[4124]: Accepted publickey for core from 139.178.89.65 port 39126 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:42.937107 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:42.944544 systemd-logind[1470]: New session 10 of user core. Jul 10 23:38:42.954517 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 23:38:43.709025 sshd[4126]: Connection closed by 139.178.89.65 port 39126 Jul 10 23:38:43.711611 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:43.719144 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. Jul 10 23:38:43.719784 systemd[1]: sshd@12-49.13.217.224:22-139.178.89.65:39126.service: Deactivated successfully. 
Jul 10 23:38:43.725137 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 23:38:43.727420 systemd-logind[1470]: Removed session 10. Jul 10 23:38:43.894742 systemd[1]: Started sshd@13-49.13.217.224:22-139.178.89.65:39140.service - OpenSSH per-connection server daemon (139.178.89.65:39140). Jul 10 23:38:44.908553 sshd[4139]: Accepted publickey for core from 139.178.89.65 port 39140 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:44.911344 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:44.917592 systemd-logind[1470]: New session 11 of user core. Jul 10 23:38:44.923614 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 23:38:45.726261 sshd[4145]: Connection closed by 139.178.89.65 port 39140 Jul 10 23:38:45.725542 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:45.731985 systemd[1]: sshd@13-49.13.217.224:22-139.178.89.65:39140.service: Deactivated successfully. Jul 10 23:38:45.736401 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 23:38:45.737815 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. Jul 10 23:38:45.740229 systemd-logind[1470]: Removed session 11. Jul 10 23:38:45.905668 systemd[1]: Started sshd@14-49.13.217.224:22-139.178.89.65:39150.service - OpenSSH per-connection server daemon (139.178.89.65:39150). Jul 10 23:38:46.922869 sshd[4154]: Accepted publickey for core from 139.178.89.65 port 39150 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:46.924927 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:46.931007 systemd-logind[1470]: New session 12 of user core. Jul 10 23:38:46.937508 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 23:38:47.285796 systemd[1]: Started sshd@15-49.13.217.224:22-103.99.206.83:57716.service - OpenSSH per-connection server daemon (103.99.206.83:57716). Jul 10 23:38:47.628758 sshd[4158]: Connection closed by 103.99.206.83 port 57716 [preauth] Jul 10 23:38:47.632082 systemd[1]: sshd@15-49.13.217.224:22-103.99.206.83:57716.service: Deactivated successfully. Jul 10 23:38:47.708353 sshd[4156]: Connection closed by 139.178.89.65 port 39150 Jul 10 23:38:47.709487 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:47.718115 systemd[1]: sshd@14-49.13.217.224:22-139.178.89.65:39150.service: Deactivated successfully. Jul 10 23:38:47.722070 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 23:38:47.723564 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Jul 10 23:38:47.724707 systemd-logind[1470]: Removed session 12. Jul 10 23:38:52.883659 systemd[1]: Started sshd@16-49.13.217.224:22-139.178.89.65:35858.service - OpenSSH per-connection server daemon (139.178.89.65:35858). Jul 10 23:38:53.864061 sshd[4176]: Accepted publickey for core from 139.178.89.65 port 35858 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:53.867345 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:53.873960 systemd-logind[1470]: New session 13 of user core. Jul 10 23:38:53.880669 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 23:38:54.633481 sshd[4178]: Connection closed by 139.178.89.65 port 35858 Jul 10 23:38:54.635650 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:54.640071 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Jul 10 23:38:54.641349 systemd[1]: sshd@16-49.13.217.224:22-139.178.89.65:35858.service: Deactivated successfully. Jul 10 23:38:54.643661 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 10 23:38:54.644831 systemd-logind[1470]: Removed session 13. Jul 10 23:38:54.817001 systemd[1]: Started sshd@17-49.13.217.224:22-139.178.89.65:35864.service - OpenSSH per-connection server daemon (139.178.89.65:35864). Jul 10 23:38:55.810819 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 35864 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:55.814047 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:55.821089 systemd-logind[1470]: New session 14 of user core. Jul 10 23:38:55.829282 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 23:38:56.615427 sshd[4192]: Connection closed by 139.178.89.65 port 35864 Jul 10 23:38:56.616504 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:56.621532 systemd[1]: sshd@17-49.13.217.224:22-139.178.89.65:35864.service: Deactivated successfully. Jul 10 23:38:56.624458 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 23:38:56.625664 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Jul 10 23:38:56.627745 systemd-logind[1470]: Removed session 14. Jul 10 23:38:56.811343 systemd[1]: Started sshd@18-49.13.217.224:22-139.178.89.65:35868.service - OpenSSH per-connection server daemon (139.178.89.65:35868). Jul 10 23:38:57.822838 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 35868 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:38:57.824869 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:57.833317 systemd-logind[1470]: New session 15 of user core. Jul 10 23:38:57.839348 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 10 23:38:59.569283 sshd[4204]: Connection closed by 139.178.89.65 port 35868 Jul 10 23:38:59.568732 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:59.575220 systemd[1]: sshd@18-49.13.217.224:22-139.178.89.65:35868.service: Deactivated successfully. Jul 10 23:38:59.579464 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 23:38:59.581982 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Jul 10 23:38:59.583856 systemd-logind[1470]: Removed session 15. Jul 10 23:38:59.753296 systemd[1]: Started sshd@19-49.13.217.224:22-139.178.89.65:35882.service - OpenSSH per-connection server daemon (139.178.89.65:35882). Jul 10 23:39:00.755563 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 35882 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:00.758037 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:00.766383 systemd-logind[1470]: New session 16 of user core. Jul 10 23:39:00.771642 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 23:39:01.685599 sshd[4223]: Connection closed by 139.178.89.65 port 35882 Jul 10 23:39:01.687424 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:01.697494 systemd[1]: sshd@19-49.13.217.224:22-139.178.89.65:35882.service: Deactivated successfully. Jul 10 23:39:01.708065 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 23:39:01.717343 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Jul 10 23:39:01.718738 systemd-logind[1470]: Removed session 16. Jul 10 23:39:01.877936 systemd[1]: Started sshd@20-49.13.217.224:22-139.178.89.65:47862.service - OpenSSH per-connection server daemon (139.178.89.65:47862). 
Jul 10 23:39:02.895799 sshd[4233]: Accepted publickey for core from 139.178.89.65 port 47862 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:02.900407 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:02.908688 systemd-logind[1470]: New session 17 of user core. Jul 10 23:39:02.922571 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 23:39:03.672670 sshd[4235]: Connection closed by 139.178.89.65 port 47862 Jul 10 23:39:03.675121 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:03.683207 systemd[1]: sshd@20-49.13.217.224:22-139.178.89.65:47862.service: Deactivated successfully. Jul 10 23:39:03.688155 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 23:39:03.690550 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit. Jul 10 23:39:03.691837 systemd-logind[1470]: Removed session 17. Jul 10 23:39:08.862788 systemd[1]: Started sshd@21-49.13.217.224:22-139.178.89.65:47870.service - OpenSSH per-connection server daemon (139.178.89.65:47870). Jul 10 23:39:09.880504 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 47870 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:09.883488 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:09.892479 systemd-logind[1470]: New session 18 of user core. Jul 10 23:39:09.897517 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 23:39:10.692765 sshd[4252]: Connection closed by 139.178.89.65 port 47870 Jul 10 23:39:10.692614 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:10.698559 systemd[1]: sshd@21-49.13.217.224:22-139.178.89.65:47870.service: Deactivated successfully. Jul 10 23:39:10.703002 systemd[1]: session-18.scope: Deactivated successfully. 
Jul 10 23:39:10.704145 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit. Jul 10 23:39:10.705803 systemd-logind[1470]: Removed session 18. Jul 10 23:39:15.866736 systemd[1]: Started sshd@22-49.13.217.224:22-139.178.89.65:37902.service - OpenSSH per-connection server daemon (139.178.89.65:37902). Jul 10 23:39:16.865323 sshd[4264]: Accepted publickey for core from 139.178.89.65 port 37902 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:16.867146 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:16.874356 systemd-logind[1470]: New session 19 of user core. Jul 10 23:39:16.881597 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 23:39:17.623902 sshd[4266]: Connection closed by 139.178.89.65 port 37902 Jul 10 23:39:17.624713 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:17.630002 systemd[1]: sshd@22-49.13.217.224:22-139.178.89.65:37902.service: Deactivated successfully. Jul 10 23:39:17.634020 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 23:39:17.637151 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit. Jul 10 23:39:17.638775 systemd-logind[1470]: Removed session 19. Jul 10 23:39:17.822146 systemd[1]: Started sshd@23-49.13.217.224:22-139.178.89.65:37910.service - OpenSSH per-connection server daemon (139.178.89.65:37910). Jul 10 23:39:18.829490 sshd[4278]: Accepted publickey for core from 139.178.89.65 port 37910 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:18.831933 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:18.841208 systemd-logind[1470]: New session 20 of user core. Jul 10 23:39:18.850726 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 23:39:21.307491 containerd[1496]: time="2025-07-10T23:39:21.303650925Z" level=info msg="StopContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" with timeout 30 (s)" Jul 10 23:39:21.312521 containerd[1496]: time="2025-07-10T23:39:21.312476038Z" level=info msg="Stop container \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" with signal terminated" Jul 10 23:39:21.314005 systemd[1]: run-containerd-runc-k8s.io-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281-runc.YBGRiK.mount: Deactivated successfully. Jul 10 23:39:21.331924 containerd[1496]: time="2025-07-10T23:39:21.328776470Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:39:21.339290 containerd[1496]: time="2025-07-10T23:39:21.338845096Z" level=info msg="StopContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" with timeout 2 (s)" Jul 10 23:39:21.340356 containerd[1496]: time="2025-07-10T23:39:21.340021730Z" level=info msg="Stop container \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" with signal terminated" Jul 10 23:39:21.341711 systemd[1]: cri-containerd-ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b.scope: Deactivated successfully. Jul 10 23:39:21.356085 systemd-networkd[1374]: lxc_health: Link DOWN Jul 10 23:39:21.356095 systemd-networkd[1374]: lxc_health: Lost carrier Jul 10 23:39:21.390392 systemd[1]: cri-containerd-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281.scope: Deactivated successfully. Jul 10 23:39:21.390976 systemd[1]: cri-containerd-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281.scope: Consumed 8.212s CPU time, 123.5M memory peak, 136K read from disk, 12.9M written to disk. 
Jul 10 23:39:21.398699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b-rootfs.mount: Deactivated successfully. Jul 10 23:39:21.416902 containerd[1496]: time="2025-07-10T23:39:21.416515079Z" level=info msg="shim disconnected" id=ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b namespace=k8s.io Jul 10 23:39:21.416902 containerd[1496]: time="2025-07-10T23:39:21.416674799Z" level=warning msg="cleaning up after shim disconnected" id=ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b namespace=k8s.io Jul 10 23:39:21.416902 containerd[1496]: time="2025-07-10T23:39:21.416685598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:21.431509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281-rootfs.mount: Deactivated successfully. Jul 10 23:39:21.442935 containerd[1496]: time="2025-07-10T23:39:21.442836658Z" level=info msg="shim disconnected" id=e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281 namespace=k8s.io Jul 10 23:39:21.442935 containerd[1496]: time="2025-07-10T23:39:21.442931298Z" level=warning msg="cleaning up after shim disconnected" id=e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281 namespace=k8s.io Jul 10 23:39:21.442935 containerd[1496]: time="2025-07-10T23:39:21.442944658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:21.445460 containerd[1496]: time="2025-07-10T23:39:21.445188966Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:39:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 23:39:21.450884 containerd[1496]: time="2025-07-10T23:39:21.450698696Z" level=info msg="StopContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" returns 
successfully" Jul 10 23:39:21.452208 containerd[1496]: time="2025-07-10T23:39:21.451914209Z" level=info msg="StopPodSandbox for \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\"" Jul 10 23:39:21.452208 containerd[1496]: time="2025-07-10T23:39:21.451991129Z" level=info msg="Container to stop \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.456091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439-shm.mount: Deactivated successfully. Jul 10 23:39:21.464364 containerd[1496]: time="2025-07-10T23:39:21.464292303Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:39:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 23:39:21.467478 systemd[1]: cri-containerd-57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439.scope: Deactivated successfully. 
Jul 10 23:39:21.472072 containerd[1496]: time="2025-07-10T23:39:21.471916702Z" level=info msg="StopContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" returns successfully" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472511739Z" level=info msg="StopPodSandbox for \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\"" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472557739Z" level=info msg="Container to stop \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472572659Z" level=info msg="Container to stop \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472582819Z" level=info msg="Container to stop \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472593499Z" level=info msg="Container to stop \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.472619 containerd[1496]: time="2025-07-10T23:39:21.472603258Z" level=info msg="Container to stop \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:39:21.484957 systemd[1]: cri-containerd-3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08.scope: Deactivated successfully. 
Jul 10 23:39:21.514367 containerd[1496]: time="2025-07-10T23:39:21.514172195Z" level=info msg="shim disconnected" id=57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439 namespace=k8s.io Jul 10 23:39:21.514367 containerd[1496]: time="2025-07-10T23:39:21.514360634Z" level=warning msg="cleaning up after shim disconnected" id=57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439 namespace=k8s.io Jul 10 23:39:21.514698 containerd[1496]: time="2025-07-10T23:39:21.514374194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:21.526944 containerd[1496]: time="2025-07-10T23:39:21.526778168Z" level=info msg="shim disconnected" id=3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08 namespace=k8s.io Jul 10 23:39:21.526944 containerd[1496]: time="2025-07-10T23:39:21.526848287Z" level=warning msg="cleaning up after shim disconnected" id=3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08 namespace=k8s.io Jul 10 23:39:21.526944 containerd[1496]: time="2025-07-10T23:39:21.526910167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:21.538126 containerd[1496]: time="2025-07-10T23:39:21.537732189Z" level=info msg="TearDown network for sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" successfully" Jul 10 23:39:21.538126 containerd[1496]: time="2025-07-10T23:39:21.537796709Z" level=info msg="StopPodSandbox for \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" returns successfully" Jul 10 23:39:21.557347 containerd[1496]: time="2025-07-10T23:39:21.555360374Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:39:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 23:39:21.558315 containerd[1496]: time="2025-07-10T23:39:21.557573003Z" level=info msg="TearDown network for sandbox 
\"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" successfully" Jul 10 23:39:21.558410 kubelet[2691]: I0710 23:39:21.557768 2691 scope.go:117] "RemoveContainer" containerID="ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b" Jul 10 23:39:21.559739 containerd[1496]: time="2025-07-10T23:39:21.559191394Z" level=info msg="StopPodSandbox for \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" returns successfully" Jul 10 23:39:21.563520 containerd[1496]: time="2025-07-10T23:39:21.563458051Z" level=info msg="RemoveContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\"" Jul 10 23:39:21.572642 containerd[1496]: time="2025-07-10T23:39:21.572586762Z" level=info msg="RemoveContainer for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" returns successfully" Jul 10 23:39:21.575456 kubelet[2691]: I0710 23:39:21.575409 2691 scope.go:117] "RemoveContainer" containerID="ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b" Jul 10 23:39:21.576039 containerd[1496]: time="2025-07-10T23:39:21.575841465Z" level=error msg="ContainerStatus for \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\": not found" Jul 10 23:39:21.577993 kubelet[2691]: E0710 23:39:21.577922 2691 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\": not found" containerID="ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b" Jul 10 23:39:21.577993 kubelet[2691]: I0710 23:39:21.577968 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b"} 
err="failed to get container status \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad2ea06d45c35745c4da5f4a11c546f16d5196905d58f9b283b4e5dbe4ce7e8b\": not found" Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707492 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-hostproc\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707586 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc603403-5115-4834-a76c-39beadd02155-clustermesh-secrets\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707635 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-kernel\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707682 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2526g\" (UniqueName: \"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707717 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-cgroup\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: 
\"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708291 kubelet[2691]: I0710 23:39:21.707750 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cni-path\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.707787 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9ts8\" (UniqueName: \"kubernetes.io/projected/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-kube-api-access-x9ts8\") pod \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\" (UID: \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.707823 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-xtables-lock\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.707881 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-cilium-config-path\") pod \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\" (UID: \"fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.707927 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc603403-5115-4834-a76c-39beadd02155-cilium-config-path\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.707966 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-hubble-tls\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708617 kubelet[2691]: I0710 23:39:21.708001 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-bpf-maps\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708784 kubelet[2691]: I0710 23:39:21.708035 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-lib-modules\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708784 kubelet[2691]: I0710 23:39:21.708074 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-etc-cni-netd\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708784 kubelet[2691]: I0710 23:39:21.708110 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-net\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708784 kubelet[2691]: I0710 23:39:21.708144 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-run\") pod \"cc603403-5115-4834-a76c-39beadd02155\" (UID: \"cc603403-5115-4834-a76c-39beadd02155\") " Jul 10 23:39:21.708784 kubelet[2691]: I0710 23:39:21.708365 2691 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-hostproc" (OuterVolumeSpecName: "hostproc") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.708979 kubelet[2691]: I0710 23:39:21.708894 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.712453 kubelet[2691]: I0710 23:39:21.712374 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.715055 kubelet[2691]: I0710 23:39:21.714185 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.715354 kubelet[2691]: I0710 23:39:21.715328 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.717201 kubelet[2691]: I0710 23:39:21.714955 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f" (UID: "fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:39:21.717408 kubelet[2691]: I0710 23:39:21.714984 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cni-path" (OuterVolumeSpecName: "cni-path") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.718841 kubelet[2691]: I0710 23:39:21.718762 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.718841 kubelet[2691]: I0710 23:39:21.718820 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.718841 kubelet[2691]: I0710 23:39:21.718838 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.719113 kubelet[2691]: I0710 23:39:21.718866 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:39:21.719113 kubelet[2691]: I0710 23:39:21.718947 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc603403-5115-4834-a76c-39beadd02155-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:39:21.721732 kubelet[2691]: I0710 23:39:21.721420 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g" (OuterVolumeSpecName: "kube-api-access-2526g") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "kube-api-access-2526g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:39:21.721732 kubelet[2691]: I0710 23:39:21.721532 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc603403-5115-4834-a76c-39beadd02155-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:39:21.722520 kubelet[2691]: I0710 23:39:21.722475 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-kube-api-access-x9ts8" (OuterVolumeSpecName: "kube-api-access-x9ts8") pod "fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f" (UID: "fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f"). InnerVolumeSpecName "kube-api-access-x9ts8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:39:21.724127 kubelet[2691]: I0710 23:39:21.724089 2691 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cc603403-5115-4834-a76c-39beadd02155" (UID: "cc603403-5115-4834-a76c-39beadd02155"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809046 2691 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-xtables-lock\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809084 2691 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-cilium-config-path\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809094 2691 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc603403-5115-4834-a76c-39beadd02155-cilium-config-path\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809105 2691 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-hubble-tls\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809114 2691 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-bpf-maps\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809122 2691 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-lib-modules\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809130 2691 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-etc-cni-netd\") on node 
\"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809390 kubelet[2691]: I0710 23:39:21.809138 2691 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-net\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809146 2691 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-run\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809154 2691 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-hostproc\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809162 2691 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc603403-5115-4834-a76c-39beadd02155-clustermesh-secrets\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809173 2691 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-host-proc-sys-kernel\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809182 2691 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2526g\" (UniqueName: \"kubernetes.io/projected/cc603403-5115-4834-a76c-39beadd02155-kube-api-access-2526g\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.809726 kubelet[2691]: I0710 23:39:21.809192 2691 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cilium-cgroup\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.810319 kubelet[2691]: I0710 23:39:21.810257 2691 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc603403-5115-4834-a76c-39beadd02155-cni-path\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.810319 kubelet[2691]: I0710 23:39:21.810314 2691 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x9ts8\" (UniqueName: \"kubernetes.io/projected/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f-kube-api-access-x9ts8\") on node \"ci-4230-2-1-n-56a4dae949\" DevicePath \"\"" Jul 10 23:39:21.864432 systemd[1]: Removed slice kubepods-besteffort-podfd35a0a3_79f2_4fce_ac2a_9a7b237f8f7f.slice - libcontainer container kubepods-besteffort-podfd35a0a3_79f2_4fce_ac2a_9a7b237f8f7f.slice. Jul 10 23:39:22.005900 kubelet[2691]: I0710 23:39:22.005814 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f" path="/var/lib/kubelet/pods/fd35a0a3-79f2-4fce-ac2a-9a7b237f8f7f/volumes" Jul 10 23:39:22.013670 systemd[1]: Removed slice kubepods-burstable-podcc603403_5115_4834_a76c_39beadd02155.slice - libcontainer container kubepods-burstable-podcc603403_5115_4834_a76c_39beadd02155.slice. Jul 10 23:39:22.013797 systemd[1]: kubepods-burstable-podcc603403_5115_4834_a76c_39beadd02155.slice: Consumed 8.322s CPU time, 123.9M memory peak, 136K read from disk, 12.9M written to disk. Jul 10 23:39:22.298333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08-rootfs.mount: Deactivated successfully. Jul 10 23:39:22.298493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08-shm.mount: Deactivated successfully. 
Jul 10 23:39:22.298615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439-rootfs.mount: Deactivated successfully. Jul 10 23:39:22.298703 systemd[1]: var-lib-kubelet-pods-cc603403\x2d5115\x2d4834\x2da76c\x2d39beadd02155-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2526g.mount: Deactivated successfully. Jul 10 23:39:22.298796 systemd[1]: var-lib-kubelet-pods-fd35a0a3\x2d79f2\x2d4fce\x2dac2a\x2d9a7b237f8f7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9ts8.mount: Deactivated successfully. Jul 10 23:39:22.298960 systemd[1]: var-lib-kubelet-pods-cc603403\x2d5115\x2d4834\x2da76c\x2d39beadd02155-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 23:39:22.299203 systemd[1]: var-lib-kubelet-pods-cc603403\x2d5115\x2d4834\x2da76c\x2d39beadd02155-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 23:39:22.583226 kubelet[2691]: I0710 23:39:22.583098 2691 scope.go:117] "RemoveContainer" containerID="e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281" Jul 10 23:39:22.588641 containerd[1496]: time="2025-07-10T23:39:22.588170535Z" level=info msg="RemoveContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\"" Jul 10 23:39:22.593744 containerd[1496]: time="2025-07-10T23:39:22.593663786Z" level=info msg="RemoveContainer for \"e596bb2c5ebad7075a4fa6fc29141a22d80e723e7068c545abd886cbf3f4c281\" returns successfully" Jul 10 23:39:22.594203 kubelet[2691]: I0710 23:39:22.594054 2691 scope.go:117] "RemoveContainer" containerID="6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5" Jul 10 23:39:22.597931 containerd[1496]: time="2025-07-10T23:39:22.597471006Z" level=info msg="RemoveContainer for \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\"" Jul 10 23:39:22.603208 containerd[1496]: time="2025-07-10T23:39:22.603161535Z" level=info 
msg="RemoveContainer for \"6579b5cbc8d89e8405994e0d71f23da923905f39fb3fd1be52b5547d3d78d5c5\" returns successfully" Jul 10 23:39:22.603819 kubelet[2691]: I0710 23:39:22.603720 2691 scope.go:117] "RemoveContainer" containerID="e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba" Jul 10 23:39:22.605324 containerd[1496]: time="2025-07-10T23:39:22.605286484Z" level=info msg="RemoveContainer for \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\"" Jul 10 23:39:22.610745 containerd[1496]: time="2025-07-10T23:39:22.610693175Z" level=info msg="RemoveContainer for \"e9eeeb716616eb705aee004a5db0d44c16e77d3094e5730b93beedfeab1637ba\" returns successfully" Jul 10 23:39:22.611288 kubelet[2691]: I0710 23:39:22.611018 2691 scope.go:117] "RemoveContainer" containerID="a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496" Jul 10 23:39:22.613636 containerd[1496]: time="2025-07-10T23:39:22.613589480Z" level=info msg="RemoveContainer for \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\"" Jul 10 23:39:22.618606 containerd[1496]: time="2025-07-10T23:39:22.618538253Z" level=info msg="RemoveContainer for \"a9d83e946805830497cc128af6a416a6b673bcf8bd9ee60ddaa96d0485cab496\" returns successfully" Jul 10 23:39:22.619174 kubelet[2691]: I0710 23:39:22.619040 2691 scope.go:117] "RemoveContainer" containerID="e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df" Jul 10 23:39:22.621085 containerd[1496]: time="2025-07-10T23:39:22.621013000Z" level=info msg="RemoveContainer for \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\"" Jul 10 23:39:22.624706 containerd[1496]: time="2025-07-10T23:39:22.624636901Z" level=info msg="RemoveContainer for \"e84c1820f3a3330ad99a9fb593f321fb798c41f274942e1df546c989033388df\" returns successfully" Jul 10 23:39:23.357420 sshd[4280]: Connection closed by 139.178.89.65 port 37910 Jul 10 23:39:23.358222 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Jul 
10 23:39:23.363627 systemd[1]: sshd@23-49.13.217.224:22-139.178.89.65:37910.service: Deactivated successfully. Jul 10 23:39:23.370177 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 23:39:23.371026 systemd[1]: session-20.scope: Consumed 1.260s CPU time, 23.6M memory peak. Jul 10 23:39:23.372283 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit. Jul 10 23:39:23.373494 systemd-logind[1470]: Removed session 20. Jul 10 23:39:23.533973 systemd[1]: Started sshd@24-49.13.217.224:22-139.178.89.65:38360.service - OpenSSH per-connection server daemon (139.178.89.65:38360). Jul 10 23:39:24.004949 kubelet[2691]: I0710 23:39:24.003825 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc603403-5115-4834-a76c-39beadd02155" path="/var/lib/kubelet/pods/cc603403-5115-4834-a76c-39beadd02155/volumes" Jul 10 23:39:24.148308 kubelet[2691]: E0710 23:39:24.148128 2691 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 23:39:24.532224 sshd[4448]: Accepted publickey for core from 139.178.89.65 port 38360 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:24.533591 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:24.542699 systemd-logind[1470]: New session 21 of user core. Jul 10 23:39:24.548666 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 10 23:39:26.398743 kubelet[2691]: E0710 23:39:26.398018 2691 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-2-1-n-56a4dae949\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-1-n-56a4dae949' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret" Jul 10 23:39:26.398743 kubelet[2691]: I0710 23:39:26.398077 2691 status_manager.go:895] "Failed to get status for pod" podUID="1493342c-5346-4f66-935f-19787d6d26bf" pod="kube-system/cilium-clmm8" err="pods \"cilium-clmm8\" is forbidden: User \"system:node:ci-4230-2-1-n-56a4dae949\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-1-n-56a4dae949' and this object" Jul 10 23:39:26.399685 systemd[1]: Created slice kubepods-burstable-pod1493342c_5346_4f66_935f_19787d6d26bf.slice - libcontainer container kubepods-burstable-pod1493342c_5346_4f66_935f_19787d6d26bf.slice. 
Jul 10 23:39:26.539404 kubelet[2691]: I0710 23:39:26.539349 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-cilium-run\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.540568 kubelet[2691]: I0710 23:39:26.540486 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4st6\" (UniqueName: \"kubernetes.io/projected/1493342c-5346-4f66-935f-19787d6d26bf-kube-api-access-c4st6\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.540896 kubelet[2691]: I0710 23:39:26.540877 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-bpf-maps\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.541147 kubelet[2691]: I0710 23:39:26.541095 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-cni-path\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.541147 kubelet[2691]: I0710 23:39:26.541124 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1493342c-5346-4f66-935f-19787d6d26bf-cilium-ipsec-secrets\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543382 kubelet[2691]: I0710 23:39:26.543331 2691 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-etc-cni-netd\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543464 kubelet[2691]: I0710 23:39:26.543392 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-lib-modules\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543464 kubelet[2691]: I0710 23:39:26.543414 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-xtables-lock\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543464 kubelet[2691]: I0710 23:39:26.543431 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1493342c-5346-4f66-935f-19787d6d26bf-cilium-config-path\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543542 kubelet[2691]: I0710 23:39:26.543462 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1493342c-5346-4f66-935f-19787d6d26bf-hubble-tls\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543684 kubelet[2691]: I0710 23:39:26.543661 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-hostproc\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543732 kubelet[2691]: I0710 23:39:26.543695 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-host-proc-sys-kernel\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543782 kubelet[2691]: I0710 23:39:26.543753 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-cilium-cgroup\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.543835 kubelet[2691]: I0710 23:39:26.543780 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1493342c-5346-4f66-935f-19787d6d26bf-clustermesh-secrets\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.544040 kubelet[2691]: I0710 23:39:26.544013 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1493342c-5346-4f66-935f-19787d6d26bf-host-proc-sys-net\") pod \"cilium-clmm8\" (UID: \"1493342c-5346-4f66-935f-19787d6d26bf\") " pod="kube-system/cilium-clmm8" Jul 10 23:39:26.559359 sshd[4450]: Connection closed by 139.178.89.65 port 38360 Jul 10 23:39:26.559173 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:26.565184 systemd[1]: sshd@24-49.13.217.224:22-139.178.89.65:38360.service: Deactivated successfully. 
Jul 10 23:39:26.567882 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 23:39:26.568297 systemd[1]: session-21.scope: Consumed 1.203s CPU time, 25.6M memory peak. Jul 10 23:39:26.569023 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit. Jul 10 23:39:26.570712 systemd-logind[1470]: Removed session 21. Jul 10 23:39:26.742698 kubelet[2691]: I0710 23:39:26.741548 2691 setters.go:618] "Node became not ready" node="ci-4230-2-1-n-56a4dae949" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T23:39:26Z","lastTransitionTime":"2025-07-10T23:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 23:39:26.742726 systemd[1]: Started sshd@25-49.13.217.224:22-139.178.89.65:38372.service - OpenSSH per-connection server daemon (139.178.89.65:38372). Jul 10 23:39:27.646857 kubelet[2691]: E0710 23:39:27.646724 2691 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 10 23:39:27.646857 kubelet[2691]: E0710 23:39:27.646852 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1493342c-5346-4f66-935f-19787d6d26bf-cilium-ipsec-secrets podName:1493342c-5346-4f66-935f-19787d6d26bf nodeName:}" failed. No retries permitted until 2025-07-10 23:39:28.146831402 +0000 UTC m=+164.291325141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1493342c-5346-4f66-935f-19787d6d26bf-cilium-ipsec-secrets") pod "cilium-clmm8" (UID: "1493342c-5346-4f66-935f-19787d6d26bf") : failed to sync secret cache: timed out waiting for the condition Jul 10 23:39:27.736361 sshd[4463]: Accepted publickey for core from 139.178.89.65 port 38372 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU Jul 10 23:39:27.739586 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:27.747530 systemd-logind[1470]: New session 22 of user core. Jul 10 23:39:27.756405 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 23:39:28.207395 containerd[1496]: time="2025-07-10T23:39:28.206987679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clmm8,Uid:1493342c-5346-4f66-935f-19787d6d26bf,Namespace:kube-system,Attempt:0,}" Jul 10 23:39:28.237275 containerd[1496]: time="2025-07-10T23:39:28.237138484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:39:28.237454 containerd[1496]: time="2025-07-10T23:39:28.237430402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:39:28.237569 containerd[1496]: time="2025-07-10T23:39:28.237540282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:39:28.237746 containerd[1496]: time="2025-07-10T23:39:28.237716841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:39:28.271518 systemd[1]: Started cri-containerd-ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea.scope - libcontainer container ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea. Jul 10 23:39:28.302708 containerd[1496]: time="2025-07-10T23:39:28.302632908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clmm8,Uid:1493342c-5346-4f66-935f-19787d6d26bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\"" Jul 10 23:39:28.311035 containerd[1496]: time="2025-07-10T23:39:28.310889186Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:39:28.331554 containerd[1496]: time="2025-07-10T23:39:28.331401161Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83\"" Jul 10 23:39:28.334747 containerd[1496]: time="2025-07-10T23:39:28.333563469Z" level=info msg="StartContainer for \"9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83\"" Jul 10 23:39:28.364530 systemd[1]: Started cri-containerd-9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83.scope - libcontainer container 9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83. Jul 10 23:39:28.403516 containerd[1496]: time="2025-07-10T23:39:28.403330512Z" level=info msg="StartContainer for \"9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83\" returns successfully" Jul 10 23:39:28.414395 systemd[1]: cri-containerd-9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83.scope: Deactivated successfully. 
Jul 10 23:39:28.418879 sshd[4465]: Connection closed by 139.178.89.65 port 38372 Jul 10 23:39:28.419548 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:28.426516 systemd[1]: sshd@25-49.13.217.224:22-139.178.89.65:38372.service: Deactivated successfully. Jul 10 23:39:28.431107 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 23:39:28.433318 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit. Jul 10 23:39:28.436092 systemd-logind[1470]: Removed session 22. Jul 10 23:39:28.458065 containerd[1496]: time="2025-07-10T23:39:28.457696913Z" level=info msg="shim disconnected" id=9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83 namespace=k8s.io Jul 10 23:39:28.458065 containerd[1496]: time="2025-07-10T23:39:28.457781713Z" level=warning msg="cleaning up after shim disconnected" id=9f54a61b28bed1d49cdd87149c330ca6083764979a5e4e4cca916ac2c5e44e83 namespace=k8s.io Jul 10 23:39:28.458065 containerd[1496]: time="2025-07-10T23:39:28.457792312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:28.596885 systemd[1]: Started sshd@26-49.13.217.224:22-139.178.89.65:38382.service - OpenSSH per-connection server daemon (139.178.89.65:38382). 
Jul 10 23:39:28.618006 containerd[1496]: time="2025-07-10T23:39:28.617958891Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:39:28.643939 containerd[1496]: time="2025-07-10T23:39:28.643788359Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc\"" Jul 10 23:39:28.646889 containerd[1496]: time="2025-07-10T23:39:28.646841143Z" level=info msg="StartContainer for \"f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc\"" Jul 10 23:39:28.679583 systemd[1]: Started cri-containerd-f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc.scope - libcontainer container f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc. Jul 10 23:39:28.713051 containerd[1496]: time="2025-07-10T23:39:28.712834565Z" level=info msg="StartContainer for \"f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc\" returns successfully" Jul 10 23:39:28.722686 systemd[1]: cri-containerd-f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc.scope: Deactivated successfully. 
Jul 10 23:39:28.752482 containerd[1496]: time="2025-07-10T23:39:28.752141203Z" level=info msg="shim disconnected" id=f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc namespace=k8s.io
Jul 10 23:39:28.752482 containerd[1496]: time="2025-07-10T23:39:28.752229803Z" level=warning msg="cleaning up after shim disconnected" id=f5bf6c1e732248dc7d0c3be65a7426fafed18421ab6b6884bb051e8691e0b6bc namespace=k8s.io
Jul 10 23:39:28.752482 containerd[1496]: time="2025-07-10T23:39:28.752281162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:39:28.768531 containerd[1496]: time="2025-07-10T23:39:28.768479519Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:39:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 10 23:39:29.149719 kubelet[2691]: E0710 23:39:29.149432 2691 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 23:39:29.598886 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 38382 ssh2: RSA SHA256:MoklUjq/dL2kXNgOLT61WMTb9cxmsEMJ+tck8UOfYFU
Jul 10 23:39:29.604081 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:39:29.635549 systemd-logind[1470]: New session 23 of user core.
Jul 10 23:39:29.639600 containerd[1496]: time="2025-07-10T23:39:29.639422952Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 23:39:29.645474 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 23:39:29.662645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655031849.mount: Deactivated successfully.
Jul 10 23:39:29.664055 containerd[1496]: time="2025-07-10T23:39:29.663395870Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009\""
Jul 10 23:39:29.667697 containerd[1496]: time="2025-07-10T23:39:29.667651088Z" level=info msg="StartContainer for \"acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009\""
Jul 10 23:39:29.729642 systemd[1]: Started cri-containerd-acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009.scope - libcontainer container acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009.
Jul 10 23:39:29.774165 containerd[1496]: time="2025-07-10T23:39:29.774108466Z" level=info msg="StartContainer for \"acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009\" returns successfully"
Jul 10 23:39:29.781996 systemd[1]: cri-containerd-acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009.scope: Deactivated successfully.
Jul 10 23:39:29.815070 containerd[1496]: time="2025-07-10T23:39:29.814991537Z" level=info msg="shim disconnected" id=acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009 namespace=k8s.io
Jul 10 23:39:29.815070 containerd[1496]: time="2025-07-10T23:39:29.815056777Z" level=warning msg="cleaning up after shim disconnected" id=acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009 namespace=k8s.io
Jul 10 23:39:29.815070 containerd[1496]: time="2025-07-10T23:39:29.815066417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:39:30.169611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acd5486a6f124515ee03feca81384866a1d8d0e62ffc547ba3fcae0a75555009-rootfs.mount: Deactivated successfully.
Jul 10 23:39:30.636678 containerd[1496]: time="2025-07-10T23:39:30.636598967Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 23:39:30.658748 containerd[1496]: time="2025-07-10T23:39:30.658673815Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0\""
Jul 10 23:39:30.660534 containerd[1496]: time="2025-07-10T23:39:30.660484126Z" level=info msg="StartContainer for \"b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0\""
Jul 10 23:39:30.701707 systemd[1]: Started cri-containerd-b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0.scope - libcontainer container b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0.
Jul 10 23:39:30.735093 systemd[1]: cri-containerd-b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0.scope: Deactivated successfully.
Jul 10 23:39:30.739428 containerd[1496]: time="2025-07-10T23:39:30.739112167Z" level=info msg="StartContainer for \"b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0\" returns successfully"
Jul 10 23:39:30.768064 containerd[1496]: time="2025-07-10T23:39:30.767752862Z" level=info msg="shim disconnected" id=b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0 namespace=k8s.io
Jul 10 23:39:30.768064 containerd[1496]: time="2025-07-10T23:39:30.767837822Z" level=warning msg="cleaning up after shim disconnected" id=b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0 namespace=k8s.io
Jul 10 23:39:30.768064 containerd[1496]: time="2025-07-10T23:39:30.767854542Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:39:30.783698 containerd[1496]: time="2025-07-10T23:39:30.783567902Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:39:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 10 23:39:31.164615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14f97fa11ae9382b2e943385e5fcce855abfc31dbcab7e133ce8817626506e0-rootfs.mount: Deactivated successfully.
Jul 10 23:39:31.654778 containerd[1496]: time="2025-07-10T23:39:31.654681825Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 23:39:31.685705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501157471.mount: Deactivated successfully.
Jul 10 23:39:31.690898 containerd[1496]: time="2025-07-10T23:39:31.690846362Z" level=info msg="CreateContainer within sandbox \"ba253e1f4df37725ad997f84388b2a2a57a9904aaa36f207eda1a97e05365bea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629\""
Jul 10 23:39:31.692540 containerd[1496]: time="2025-07-10T23:39:31.692443754Z" level=info msg="StartContainer for \"40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629\""
Jul 10 23:39:31.728487 systemd[1]: Started cri-containerd-40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629.scope - libcontainer container 40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629.
Jul 10 23:39:31.765213 containerd[1496]: time="2025-07-10T23:39:31.765151188Z" level=info msg="StartContainer for \"40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629\" returns successfully"
Jul 10 23:39:32.097281 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 10 23:39:32.166268 systemd[1]: run-containerd-runc-k8s.io-40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629-runc.o69cuB.mount: Deactivated successfully.
Jul 10 23:39:32.671114 kubelet[2691]: I0710 23:39:32.671023 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-clmm8" podStartSLOduration=6.670985599 podStartE2EDuration="6.670985599s" podCreationTimestamp="2025-07-10 23:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:39:32.66886453 +0000 UTC m=+168.813358269" watchObservedRunningTime="2025-07-10 23:39:32.670985599 +0000 UTC m=+168.815479418"
Jul 10 23:39:34.391432 systemd[1]: run-containerd-runc-k8s.io-40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629-runc.z69FBA.mount: Deactivated successfully.
Jul 10 23:39:35.270923 systemd-networkd[1374]: lxc_health: Link UP
Jul 10 23:39:35.283422 systemd-networkd[1374]: lxc_health: Gained carrier
Jul 10 23:39:36.950217 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jul 10 23:39:40.983318 systemd[1]: run-containerd-runc-k8s.io-40f0200929fc243de452ca869abbfe5ffd06e7ab662939063fc2293f2a336629-runc.nwDKBF.mount: Deactivated successfully.
Jul 10 23:39:41.217859 sshd[4639]: Connection closed by 139.178.89.65 port 38382
Jul 10 23:39:41.218543 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
Jul 10 23:39:41.224153 systemd[1]: sshd@26-49.13.217.224:22-139.178.89.65:38382.service: Deactivated successfully.
Jul 10 23:39:41.228508 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 23:39:41.232872 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit.
Jul 10 23:39:41.234417 systemd-logind[1470]: Removed session 23.
Jul 10 23:39:44.025039 containerd[1496]: time="2025-07-10T23:39:44.024634535Z" level=info msg="StopPodSandbox for \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\""
Jul 10 23:39:44.025039 containerd[1496]: time="2025-07-10T23:39:44.024768656Z" level=info msg="TearDown network for sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" successfully"
Jul 10 23:39:44.025039 containerd[1496]: time="2025-07-10T23:39:44.024782697Z" level=info msg="StopPodSandbox for \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" returns successfully"
Jul 10 23:39:44.028345 containerd[1496]: time="2025-07-10T23:39:44.026641798Z" level=info msg="RemovePodSandbox for \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\""
Jul 10 23:39:44.028345 containerd[1496]: time="2025-07-10T23:39:44.026721239Z" level=info msg="Forcibly stopping sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\""
Jul 10 23:39:44.028345 containerd[1496]: time="2025-07-10T23:39:44.026812600Z" level=info msg="TearDown network for sandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" successfully"
Jul 10 23:39:44.032623 containerd[1496]: time="2025-07-10T23:39:44.032479983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 23:39:44.032847 containerd[1496]: time="2025-07-10T23:39:44.032652665Z" level=info msg="RemovePodSandbox \"3bab25809635c043e2778a9910089c0cc5b74b2731595651366f480726787e08\" returns successfully"
Jul 10 23:39:44.033893 containerd[1496]: time="2025-07-10T23:39:44.033617916Z" level=info msg="StopPodSandbox for \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\""
Jul 10 23:39:44.034022 containerd[1496]: time="2025-07-10T23:39:44.033983600Z" level=info msg="TearDown network for sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" successfully"
Jul 10 23:39:44.034071 containerd[1496]: time="2025-07-10T23:39:44.034022161Z" level=info msg="StopPodSandbox for \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" returns successfully"
Jul 10 23:39:44.034773 containerd[1496]: time="2025-07-10T23:39:44.034622848Z" level=info msg="RemovePodSandbox for \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\""
Jul 10 23:39:44.034773 containerd[1496]: time="2025-07-10T23:39:44.034764969Z" level=info msg="Forcibly stopping sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\""
Jul 10 23:39:44.034903 containerd[1496]: time="2025-07-10T23:39:44.034848170Z" level=info msg="TearDown network for sandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" successfully"
Jul 10 23:39:44.039934 containerd[1496]: time="2025-07-10T23:39:44.039859947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 23:39:44.040048 containerd[1496]: time="2025-07-10T23:39:44.039981268Z" level=info msg="RemovePodSandbox \"57c9868f13debd723220500ffc6a596e8a615fa44c73b178a98e4a44e9265439\" returns successfully"
Jul 10 23:39:53.495853 systemd[1]: Started sshd@27-49.13.217.224:22-103.99.206.83:52034.service - OpenSSH per-connection server daemon (103.99.206.83:52034).
Jul 10 23:39:53.835115 sshd[5413]: Connection closed by 103.99.206.83 port 52034 [preauth]
Jul 10 23:39:53.836675 systemd[1]: sshd@27-49.13.217.224:22-103.99.206.83:52034.service: Deactivated successfully.
Jul 10 23:39:56.872198 kubelet[2691]: E0710 23:39:56.871661 2691 controller.go:195] "Failed to update lease" err="Put \"https://49.13.217.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-n-56a4dae949?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 10 23:39:57.311902 kubelet[2691]: E0710 23:39:57.311830 2691 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52552->10.0.0.2:2379: read: connection timed out"
Jul 10 23:39:57.360799 systemd[1]: cri-containerd-33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a.scope: Deactivated successfully.
Jul 10 23:39:57.361134 systemd[1]: cri-containerd-33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a.scope: Consumed 5.406s CPU time, 57.7M memory peak.
Jul 10 23:39:57.403300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a-rootfs.mount: Deactivated successfully.
Jul 10 23:39:57.417141 containerd[1496]: time="2025-07-10T23:39:57.416770681Z" level=info msg="shim disconnected" id=33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a namespace=k8s.io
Jul 10 23:39:57.418166 containerd[1496]: time="2025-07-10T23:39:57.416881642Z" level=warning msg="cleaning up after shim disconnected" id=33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a namespace=k8s.io
Jul 10 23:39:57.418166 containerd[1496]: time="2025-07-10T23:39:57.417927491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:39:57.724448 kubelet[2691]: I0710 23:39:57.724382 2691 scope.go:117] "RemoveContainer" containerID="33a4282b3a5dc1be2e1b93e8f805a0bded913628cce0dab07a64986aaad3f78a"
Jul 10 23:39:57.728396 containerd[1496]: time="2025-07-10T23:39:57.728337521Z" level=info msg="CreateContainer within sandbox \"3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 10 23:39:57.751632 containerd[1496]: time="2025-07-10T23:39:57.751380882Z" level=info msg="CreateContainer within sandbox \"3bf91a77eb3fd53f7f1cc252db5cb26cfd842b1ac19b5b35d167be0f8349ac91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb4cec3e1622e60fdc25696d02e0af626f09e361d5991532c85118fffa430d7c\""
Jul 10 23:39:57.754078 containerd[1496]: time="2025-07-10T23:39:57.752542012Z" level=info msg="StartContainer for \"cb4cec3e1622e60fdc25696d02e0af626f09e361d5991532c85118fffa430d7c\""
Jul 10 23:39:57.799522 systemd[1]: Started cri-containerd-cb4cec3e1622e60fdc25696d02e0af626f09e361d5991532c85118fffa430d7c.scope - libcontainer container cb4cec3e1622e60fdc25696d02e0af626f09e361d5991532c85118fffa430d7c.
Jul 10 23:39:57.849775 containerd[1496]: time="2025-07-10T23:39:57.849642780Z" level=info msg="StartContainer for \"cb4cec3e1622e60fdc25696d02e0af626f09e361d5991532c85118fffa430d7c\" returns successfully"
Jul 10 23:40:00.695799 kubelet[2691]: E0710 23:40:00.695643 2691 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52344->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-1-n-56a4dae949.1851083d7d95effb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-1-n-56a4dae949,UID:4b590dbfa799512150261657d199c8d1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-n-56a4dae949,},FirstTimestamp:2025-07-10 23:39:50.216839163 +0000 UTC m=+186.361332902,LastTimestamp:2025-07-10 23:39:50.216839163 +0000 UTC m=+186.361332902,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-n-56a4dae949,}"