Jun 20 19:08:01.884169 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 20 19:08:01.884194 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jun 20 17:15:00 -00 2025
Jun 20 19:08:01.884205 kernel: KASLR enabled
Jun 20 19:08:01.884212 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jun 20 19:08:01.884218 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jun 20 19:08:01.884224 kernel: random: crng init done
Jun 20 19:08:01.884232 kernel: secureboot: Secure boot disabled
Jun 20 19:08:01.884238 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:08:01.884245 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jun 20 19:08:01.884253 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jun 20 19:08:01.884260 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884267 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884273 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884280 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884288 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884297 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884304 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884311 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884318 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:08:01.884324 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jun 20 19:08:01.884331 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jun 20 19:08:01.884338 kernel: NUMA: Failed to initialise from firmware
Jun 20 19:08:01.884345 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jun 20 19:08:01.884352 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jun 20 19:08:01.884359 kernel: Zone ranges:
Jun 20 19:08:01.884367 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jun 20 19:08:01.884374 kernel: DMA32 empty
Jun 20 19:08:01.884381 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jun 20 19:08:01.884388 kernel: Movable zone start for each node
Jun 20 19:08:01.884395 kernel: Early memory node ranges
Jun 20 19:08:01.884402 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jun 20 19:08:01.884408 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jun 20 19:08:01.884415 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jun 20 19:08:01.884422 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jun 20 19:08:01.884429 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jun 20 19:08:01.884435 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jun 20 19:08:01.884442 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jun 20 19:08:01.884451 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jun 20 19:08:01.884458 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jun 20 19:08:01.884465 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jun 20 19:08:01.884475 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jun 20 19:08:01.884482 kernel: psci: probing for conduit method from ACPI.
Jun 20 19:08:01.884489 kernel: psci: PSCIv1.1 detected in firmware.
Jun 20 19:08:01.884498 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 20 19:08:01.884506 kernel: psci: Trusted OS migration not required
Jun 20 19:08:01.884513 kernel: psci: SMC Calling Convention v1.1
Jun 20 19:08:01.884520 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 20 19:08:01.884528 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jun 20 19:08:01.884535 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jun 20 19:08:01.884558 kernel: pcpu-alloc: [0] 0 [0] 1
Jun 20 19:08:01.884566 kernel: Detected PIPT I-cache on CPU0
Jun 20 19:08:01.884573 kernel: CPU features: detected: GIC system register CPU interface
Jun 20 19:08:01.884581 kernel: CPU features: detected: Hardware dirty bit management
Jun 20 19:08:01.884590 kernel: CPU features: detected: Spectre-v4
Jun 20 19:08:01.884597 kernel: CPU features: detected: Spectre-BHB
Jun 20 19:08:01.884604 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 20 19:08:01.884612 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 20 19:08:01.884619 kernel: CPU features: detected: ARM erratum 1418040
Jun 20 19:08:01.884626 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jun 20 19:08:01.884633 kernel: alternatives: applying boot alternatives
Jun 20 19:08:01.884642 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e
Jun 20 19:08:01.884650 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:08:01.884657 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:08:01.884664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:08:01.884673 kernel: Fallback order for Node 0: 0
Jun 20 19:08:01.884681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jun 20 19:08:01.884688 kernel: Policy zone: Normal
Jun 20 19:08:01.884695 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:08:01.884702 kernel: software IO TLB: area num 2.
Jun 20 19:08:01.884710 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jun 20 19:08:01.884718 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved)
Jun 20 19:08:01.884725 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:08:01.884733 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:08:01.884740 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:08:01.884748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:08:01.884755 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:08:01.884765 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:08:01.884772 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:08:01.884779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:08:01.884786 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 20 19:08:01.884794 kernel: GICv3: 256 SPIs implemented
Jun 20 19:08:01.884801 kernel: GICv3: 0 Extended SPIs implemented
Jun 20 19:08:01.884808 kernel: Root IRQ handler: gic_handle_irq
Jun 20 19:08:01.884815 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 20 19:08:01.884822 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 20 19:08:01.884830 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 20 19:08:01.884837 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jun 20 19:08:01.884847 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jun 20 19:08:01.884854 kernel: GICv3: using LPI property table @0x00000001000e0000
Jun 20 19:08:01.884861 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jun 20 19:08:01.884869 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:08:01.884876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 20 19:08:01.884892 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 20 19:08:01.884901 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 20 19:08:01.884909 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 20 19:08:01.884916 kernel: Console: colour dummy device 80x25
Jun 20 19:08:01.884924 kernel: ACPI: Core revision 20230628
Jun 20 19:08:01.884932 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 20 19:08:01.884942 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:08:01.884949 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 19:08:01.884957 kernel: landlock: Up and running.
Jun 20 19:08:01.884964 kernel: SELinux: Initializing.
Jun 20 19:08:01.884972 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:08:01.884980 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:08:01.884987 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:08:01.884995 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:08:01.885002 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:08:01.885011 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:08:01.885019 kernel: Platform MSI: ITS@0x8080000 domain created
Jun 20 19:08:01.885026 kernel: PCI/MSI: ITS@0x8080000 domain created
Jun 20 19:08:01.885034 kernel: Remapping and enabling EFI services.
Jun 20 19:08:01.885041 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:08:01.885049 kernel: Detected PIPT I-cache on CPU1
Jun 20 19:08:01.885056 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 20 19:08:01.885064 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jun 20 19:08:01.885071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 20 19:08:01.885080 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 20 19:08:01.885088 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:08:01.885101 kernel: SMP: Total of 2 processors activated.
Jun 20 19:08:01.885111 kernel: CPU features: detected: 32-bit EL0 Support
Jun 20 19:08:01.885119 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 20 19:08:01.885127 kernel: CPU features: detected: Common not Private translations
Jun 20 19:08:01.885135 kernel: CPU features: detected: CRC32 instructions
Jun 20 19:08:01.885142 kernel: CPU features: detected: Enhanced Virtualization Traps
Jun 20 19:08:01.885151 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 20 19:08:01.885160 kernel: CPU features: detected: LSE atomic instructions
Jun 20 19:08:01.885168 kernel: CPU features: detected: Privileged Access Never
Jun 20 19:08:01.885176 kernel: CPU features: detected: RAS Extension Support
Jun 20 19:08:01.885185 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 20 19:08:01.885193 kernel: CPU: All CPU(s) started at EL1
Jun 20 19:08:01.885201 kernel: alternatives: applying system-wide alternatives
Jun 20 19:08:01.885209 kernel: devtmpfs: initialized
Jun 20 19:08:01.885217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:08:01.885226 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:08:01.885234 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:08:01.885242 kernel: SMBIOS 3.0.0 present.
Jun 20 19:08:01.885250 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jun 20 19:08:01.885258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:08:01.885265 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 20 19:08:01.885273 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 20 19:08:01.885281 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 20 19:08:01.885289 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:08:01.885299 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jun 20 19:08:01.885307 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:08:01.885315 kernel: cpuidle: using governor menu
Jun 20 19:08:01.885323 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 20 19:08:01.885331 kernel: ASID allocator initialised with 32768 entries
Jun 20 19:08:01.885339 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:08:01.885347 kernel: Serial: AMBA PL011 UART driver
Jun 20 19:08:01.885355 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 20 19:08:01.885363 kernel: Modules: 0 pages in range for non-PLT usage
Jun 20 19:08:01.885372 kernel: Modules: 509264 pages in range for PLT usage
Jun 20 19:08:01.885380 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:08:01.885388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:08:01.885396 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 20 19:08:01.885404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 20 19:08:01.885412 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:08:01.885420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:08:01.885428 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 20 19:08:01.885436 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 20 19:08:01.885445 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:08:01.885453 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:08:01.885461 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:08:01.885469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:08:01.885477 kernel: ACPI: Interpreter enabled
Jun 20 19:08:01.885485 kernel: ACPI: Using GIC for interrupt routing
Jun 20 19:08:01.885493 kernel: ACPI: MCFG table detected, 1 entries
Jun 20 19:08:01.885501 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 20 19:08:01.885509 kernel: printk: console [ttyAMA0] enabled
Jun 20 19:08:01.885518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:08:01.887847 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:08:01.887968 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 20 19:08:01.888037 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 20 19:08:01.888101 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 20 19:08:01.888163 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 20 19:08:01.888173 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 20 19:08:01.888188 kernel: PCI host bridge to bus 0000:00
Jun 20 19:08:01.888266 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 20 19:08:01.888325 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 20 19:08:01.888382 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 20 19:08:01.888438 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:08:01.888517 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jun 20 19:08:01.888612 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jun 20 19:08:01.888685 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jun 20 19:08:01.888752 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jun 20 19:08:01.888826 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.888926 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jun 20 19:08:01.889009 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.889077 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jun 20 19:08:01.889156 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.889225 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jun 20 19:08:01.889299 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.889365 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jun 20 19:08:01.889442 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.889510 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jun 20 19:08:01.891716 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.891802 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jun 20 19:08:01.891878 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.891960 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jun 20 19:08:01.892035 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.892103 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jun 20 19:08:01.892183 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jun 20 19:08:01.892249 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jun 20 19:08:01.892324 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jun 20 19:08:01.892390 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jun 20 19:08:01.892469 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jun 20 19:08:01.892549 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jun 20 19:08:01.892631 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 20 19:08:01.892701 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jun 20 19:08:01.892777 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jun 20 19:08:01.892844 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jun 20 19:08:01.892935 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jun 20 19:08:01.893009 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jun 20 19:08:01.893077 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jun 20 19:08:01.893162 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jun 20 19:08:01.893232 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jun 20 19:08:01.893308 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jun 20 19:08:01.893379 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jun 20 19:08:01.893447 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jun 20 19:08:01.893521 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jun 20 19:08:01.895705 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jun 20 19:08:01.895805 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jun 20 19:08:01.895924 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jun 20 19:08:01.896010 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jun 20 19:08:01.896079 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jun 20 19:08:01.896148 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jun 20 19:08:01.896225 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jun 20 19:08:01.896290 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jun 20 19:08:01.896354 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jun 20 19:08:01.896422 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jun 20 19:08:01.896489 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jun 20 19:08:01.897659 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jun 20 19:08:01.897757 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jun 20 19:08:01.897823 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jun 20 19:08:01.897911 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jun 20 19:08:01.897990 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jun 20 19:08:01.898055 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jun 20 19:08:01.898118 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jun 20 19:08:01.898185 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jun 20 19:08:01.898248 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jun 20 19:08:01.898311 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jun 20 19:08:01.898386 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jun 20 19:08:01.898450 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jun 20 19:08:01.898513 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jun 20 19:08:01.900671 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jun 20 19:08:01.900758 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jun 20 19:08:01.900824 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jun 20 19:08:01.900928 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jun 20 19:08:01.901005 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jun 20 19:08:01.901077 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jun 20 19:08:01.901146 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jun 20 19:08:01.901211 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jun 20 19:08:01.901273 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jun 20 19:08:01.901340 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jun 20 19:08:01.901404 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jun 20 19:08:01.901470 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jun 20 19:08:01.901537 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jun 20 19:08:01.901708 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jun 20 19:08:01.901773 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jun 20 19:08:01.901839 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jun 20 19:08:01.901920 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jun 20 19:08:01.901989 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jun 20 19:08:01.902052 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jun 20 19:08:01.902123 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jun 20 19:08:01.902190 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jun 20 19:08:01.902254 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jun 20 19:08:01.902318 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jun 20 19:08:01.902384 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jun 20 19:08:01.902447 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jun 20 19:08:01.902514 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jun 20 19:08:01.903629 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jun 20 19:08:01.903714 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jun 20 19:08:01.903790 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jun 20 19:08:01.903858 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jun 20 19:08:01.903964 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jun 20 19:08:01.904036 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jun 20 19:08:01.904102 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jun 20 19:08:01.904175 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jun 20 19:08:01.904239 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jun 20 19:08:01.904306 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jun 20 19:08:01.904370 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jun 20 19:08:01.904438 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jun 20 19:08:01.904502 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jun 20 19:08:01.904591 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jun 20 19:08:01.904659 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jun 20 19:08:01.904729 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jun 20 19:08:01.904794 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jun 20 19:08:01.904860 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jun 20 19:08:01.904937 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jun 20 19:08:01.905003 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jun 20 19:08:01.905067 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jun 20 19:08:01.905135 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jun 20 19:08:01.905205 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jun 20 19:08:01.905276 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 20 19:08:01.905343 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jun 20 19:08:01.905406 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jun 20 19:08:01.905469 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jun 20 19:08:01.905532 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jun 20 19:08:01.907772 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jun 20 19:08:01.907853 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jun 20 19:08:01.907944 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jun 20 19:08:01.908011 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jun 20 19:08:01.908074 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jun 20 19:08:01.908137 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jun 20 19:08:01.908209 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jun 20 19:08:01.908279 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jun 20 19:08:01.908344 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jun 20 19:08:01.908408 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jun 20 19:08:01.908470 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jun 20 19:08:01.908534 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jun 20 19:08:01.908622 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jun 20 19:08:01.908688 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jun 20 19:08:01.908753 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jun 20 19:08:01.908823 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jun 20 19:08:01.908899 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jun 20 19:08:01.908976 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jun 20 19:08:01.909043 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jun 20 19:08:01.909109 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jun 20 19:08:01.909173 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jun 20 19:08:01.909237 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jun 20 19:08:01.909302 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jun 20 19:08:01.909376 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jun 20 19:08:01.909444 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jun 20 19:08:01.909508 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jun 20 19:08:01.911716 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jun 20 19:08:01.911800 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jun 20 19:08:01.911865 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jun 20 19:08:01.911981 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jun 20 19:08:01.912066 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jun 20 19:08:01.912142 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jun 20 19:08:01.912209 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jun 20 19:08:01.912276 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jun 20 19:08:01.912339 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jun 20 19:08:01.912404 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jun 20 19:08:01.912472 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jun 20 19:08:01.912536 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jun 20 19:08:01.913627 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jun 20 19:08:01.913710 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jun 20 19:08:01.913779 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jun 20 19:08:01.913844 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jun 20 19:08:01.914656 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jun 20 19:08:01.914731 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jun 20 19:08:01.914815 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 20 19:08:01.914878 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 20 19:08:01.915013 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 20 19:08:01.915087 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jun 20 19:08:01.915148 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jun 20 19:08:01.915207 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jun 20 19:08:01.915274 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jun 20 19:08:01.915335 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jun 20 19:08:01.915394 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jun 20 19:08:01.915466 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jun 20 19:08:01.915549 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jun 20 19:08:01.915620 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jun 20 19:08:01.915691 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jun 20 19:08:01.915753 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jun 20 19:08:01.915813 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jun 20 19:08:01.915904 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jun 20 19:08:01.915975 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jun 20 19:08:01.916037 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jun 20 19:08:01.916103 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jun 20 19:08:01.916163 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jun 20 19:08:01.916225 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jun 20 19:08:01.916292 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jun 20 19:08:01.916350 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jun 20 19:08:01.916409 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jun 20 19:08:01.916474 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jun 20 19:08:01.916534 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jun 20 19:08:01.918637 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jun 20 19:08:01.918724 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jun 20 19:08:01.918791 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jun 20 19:08:01.918853 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jun 20 19:08:01.918864 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 20 19:08:01.918873 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 20 19:08:01.918893 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 20 19:08:01.918908 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 20 19:08:01.918919 kernel: iommu: Default domain type: Translated
Jun 20 19:08:01.918928 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 20 19:08:01.918936 kernel: efivars: Registered efivars operations
Jun 20 19:08:01.918945 kernel: vgaarb: loaded
Jun 20 19:08:01.918953 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 20 19:08:01.918962 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:08:01.918971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:08:01.918979 kernel: pnp: PnP ACPI init
Jun 20 19:08:01.919062 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 20 19:08:01.919076 kernel: pnp: PnP ACPI: found 1 devices
Jun 20 19:08:01.919085 kernel: NET: Registered PF_INET protocol family
Jun 20 19:08:01.919101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:08:01.919111 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:08:01.919120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:08:01.919129 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:08:01.919137
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 19:08:01.919145 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 19:08:01.919156 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:08:01.919165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:08:01.919174 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:08:01.919257 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jun 20 19:08:01.919269 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:08:01.919282 kernel: kvm [1]: HYP mode not available Jun 20 19:08:01.919291 kernel: Initialise system trusted keyrings Jun 20 19:08:01.919304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 19:08:01.919315 kernel: Key type asymmetric registered Jun 20 19:08:01.919327 kernel: Asymmetric key parser 'x509' registered Jun 20 19:08:01.919336 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:08:01.919344 kernel: io scheduler mq-deadline registered Jun 20 19:08:01.919353 kernel: io scheduler kyber registered Jun 20 19:08:01.919364 kernel: io scheduler bfq registered Jun 20 19:08:01.919374 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 20 19:08:01.919461 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jun 20 19:08:01.919532 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jun 20 19:08:01.920723 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.920797 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jun 20 19:08:01.920864 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jun 20 19:08:01.920977 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.921050 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Jun 20 19:08:01.921116 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jun 20 19:08:01.921186 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.921255 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jun 20 19:08:01.921319 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jun 20 19:08:01.921383 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.921449 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jun 20 19:08:01.921514 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jun 20 19:08:01.923107 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.923193 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jun 20 19:08:01.923259 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jun 20 19:08:01.923324 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.925631 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jun 20 19:08:01.925720 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jun 20 19:08:01.925795 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.925864 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jun 20 19:08:01.925987 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jun 20 19:08:01.926058 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.926070 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Jun 20 19:08:01.926135 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jun 20 19:08:01.926204 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jun 20 19:08:01.926269 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:01.926279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 19:08:01.926288 kernel: ACPI: button: Power Button [PWRB] Jun 20 19:08:01.926297 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 20 19:08:01.926367 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jun 20 19:08:01.926437 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jun 20 19:08:01.926448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:08:01.926457 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 20 19:08:01.926526 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jun 20 19:08:01.926537 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jun 20 19:08:01.926565 kernel: thunder_xcv, ver 1.0 Jun 20 19:08:01.926575 kernel: thunder_bgx, ver 1.0 Jun 20 19:08:01.926585 kernel: nicpf, ver 1.0 Jun 20 19:08:01.926596 kernel: nicvf, ver 1.0 Jun 20 19:08:01.926682 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 19:08:01.926747 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T19:08:01 UTC (1750446481) Jun 20 19:08:01.926760 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:08:01.926769 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jun 20 19:08:01.926777 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 20 19:08:01.926786 kernel: watchdog: Hard watchdog permanently disabled Jun 20 19:08:01.926794 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:08:01.926803 kernel: Segment Routing with IPv6 Jun 20 19:08:01.926811 kernel: In-situ OAM 
(IOAM) with IPv6 Jun 20 19:08:01.926819 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:08:01.926828 kernel: Key type dns_resolver registered Jun 20 19:08:01.926838 kernel: registered taskstats version 1 Jun 20 19:08:01.926846 kernel: Loading compiled-in X.509 certificates Jun 20 19:08:01.926855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 8506faa781fda315da94c2790de0e5c860361c93' Jun 20 19:08:01.926863 kernel: Key type .fscrypt registered Jun 20 19:08:01.926871 kernel: Key type fscrypt-provisioning registered Jun 20 19:08:01.926880 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:08:01.926900 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:08:01.926908 kernel: ima: No architecture policies found Jun 20 19:08:01.926919 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 19:08:01.926928 kernel: clk: Disabling unused clocks Jun 20 19:08:01.926936 kernel: Freeing unused kernel memory: 38336K Jun 20 19:08:01.926945 kernel: Run /init as init process Jun 20 19:08:01.926953 kernel: with arguments: Jun 20 19:08:01.926962 kernel: /init Jun 20 19:08:01.926970 kernel: with environment: Jun 20 19:08:01.926978 kernel: HOME=/ Jun 20 19:08:01.926987 kernel: TERM=linux Jun 20 19:08:01.926995 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:08:01.927005 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:08:01.927018 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:08:01.927027 systemd[1]: Detected virtualization kvm. Jun 20 19:08:01.927035 systemd[1]: Detected architecture arm64. Jun 20 19:08:01.927044 systemd[1]: Running in initrd. 
Jun 20 19:08:01.927052 systemd[1]: No hostname configured, using default hostname. Jun 20 19:08:01.927061 systemd[1]: Hostname set to . Jun 20 19:08:01.927072 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:08:01.927080 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:08:01.927089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:08:01.927098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:08:01.927107 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:08:01.927116 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:08:01.927125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:08:01.927137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:08:01.927146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:08:01.927155 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:08:01.927164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:08:01.927173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:01.927182 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:08:01.927191 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:08:01.927199 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:08:01.927210 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:08:01.927220 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jun 20 19:08:01.927229 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:08:01.927238 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:08:01.927247 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:08:01.927256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:08:01.927264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:08:01.927282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:08:01.927294 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:08:01.927303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:08:01.927312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:08:01.927321 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:08:01.927330 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:08:01.927339 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:08:01.927348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:08:01.927357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:01.927366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:08:01.927401 systemd-journald[238]: Collecting audit messages is disabled. Jun 20 19:08:01.927423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:01.927434 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:08:01.927444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:08:01.927453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:08:01.927463 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:08:01.927472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:01.927481 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:08:01.927492 kernel: Bridge firewalling registered Jun 20 19:08:01.927501 systemd-journald[238]: Journal started Jun 20 19:08:01.927521 systemd-journald[238]: Runtime Journal (/run/log/journal/e18d9fc179c141dd855f11cc18e52835) is 8M, max 76.6M, 68.6M free. Jun 20 19:08:01.929986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:08:01.899230 systemd-modules-load[239]: Inserted module 'overlay' Jun 20 19:08:01.922345 systemd-modules-load[239]: Inserted module 'br_netfilter' Jun 20 19:08:01.933860 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:08:01.933286 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:01.937676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:01.940575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:01.946769 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:08:01.949161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:08:01.953801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:08:01.966602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:01.968655 dracut-cmdline[267]: dracut-dracut-053 Jun 20 19:08:01.969687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 20 19:08:01.975608 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 19:08:01.981730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:08:02.011377 systemd-resolved[286]: Positive Trust Anchors: Jun 20 19:08:02.011396 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:08:02.011427 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:08:02.016979 systemd-resolved[286]: Defaulting to hostname 'linux'. Jun 20 19:08:02.017985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:08:02.018631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:02.082593 kernel: SCSI subsystem initialized Jun 20 19:08:02.086583 kernel: Loading iSCSI transport class v2.0-870. 
Jun 20 19:08:02.094792 kernel: iscsi: registered transport (tcp) Jun 20 19:08:02.109644 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:08:02.109710 kernel: QLogic iSCSI HBA Driver Jun 20 19:08:02.163610 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:08:02.170794 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:08:02.188138 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:08:02.188197 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:08:02.188210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 19:08:02.238613 kernel: raid6: neonx8 gen() 15637 MB/s Jun 20 19:08:02.255582 kernel: raid6: neonx4 gen() 14794 MB/s Jun 20 19:08:02.272584 kernel: raid6: neonx2 gen() 12978 MB/s Jun 20 19:08:02.289587 kernel: raid6: neonx1 gen() 10290 MB/s Jun 20 19:08:02.306587 kernel: raid6: int64x8 gen() 6660 MB/s Jun 20 19:08:02.323615 kernel: raid6: int64x4 gen() 7215 MB/s Jun 20 19:08:02.340596 kernel: raid6: int64x2 gen() 6001 MB/s Jun 20 19:08:02.357598 kernel: raid6: int64x1 gen() 4971 MB/s Jun 20 19:08:02.357643 kernel: raid6: using algorithm neonx8 gen() 15637 MB/s Jun 20 19:08:02.374589 kernel: raid6: .... xor() 11753 MB/s, rmw enabled Jun 20 19:08:02.374653 kernel: raid6: using neon recovery algorithm Jun 20 19:08:02.379573 kernel: xor: measuring software checksum speed Jun 20 19:08:02.379628 kernel: 8regs : 21647 MB/sec Jun 20 19:08:02.380683 kernel: 32regs : 18929 MB/sec Jun 20 19:08:02.380721 kernel: arm64_neon : 27889 MB/sec Jun 20 19:08:02.380743 kernel: xor: using function: arm64_neon (27889 MB/sec) Jun 20 19:08:02.433610 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:08:02.447844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 20 19:08:02.454850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:02.471900 systemd-udevd[458]: Using default interface naming scheme 'v255'. Jun 20 19:08:02.475964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:02.487022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:08:02.501021 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Jun 20 19:08:02.537141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:08:02.545838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:08:02.595484 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:02.602750 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:08:02.619625 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:08:02.623627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:08:02.626213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:08:02.628029 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:08:02.633737 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:08:02.656012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:08:02.706166 kernel: scsi host0: Virtio SCSI HBA Jun 20 19:08:02.708606 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 20 19:08:02.708686 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jun 20 19:08:02.727960 kernel: ACPI: bus type USB registered Jun 20 19:08:02.728017 kernel: usbcore: registered new interface driver usbfs Jun 20 19:08:02.730953 kernel: usbcore: registered new interface driver hub Jun 20 19:08:02.731569 kernel: usbcore: registered new device driver usb Jun 20 19:08:02.757610 kernel: sr 0:0:0:0: Power-on or device reset occurred Jun 20 19:08:02.760562 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jun 20 19:08:02.760759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:08:02.763608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:08:02.766083 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:08:02.765211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:02.767957 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:02.768864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:02.769055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:02.771660 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:08:02.777591 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:08:02.777804 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jun 20 19:08:02.779401 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jun 20 19:08:02.782557 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:08:02.783018 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jun 20 19:08:02.783112 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jun 20 19:08:02.783559 kernel: hub 1-0:1.0: USB hub found Jun 20 19:08:02.784768 kernel: sd 0:0:0:1: Power-on or device reset occurred Jun 20 19:08:02.786024 kernel: hub 1-0:1.0: 4 ports detected Jun 20 19:08:02.786132 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jun 20 19:08:02.788046 kernel: sd 0:0:0:1: [sda] Write Protect is off Jun 20 19:08:02.788297 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jun 20 19:08:02.786466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:02.793771 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 20 19:08:02.793820 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 20 19:08:02.795650 kernel: hub 2-0:1.0: USB hub found Jun 20 19:08:02.795855 kernel: hub 2-0:1.0: 4 ports detected Jun 20 19:08:02.802624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:08:02.802662 kernel: GPT:17805311 != 80003071 Jun 20 19:08:02.802674 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:08:02.802684 kernel: GPT:17805311 != 80003071 Jun 20 19:08:02.802693 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 20 19:08:02.802702 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:02.804593 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jun 20 19:08:02.808655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:02.815714 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:02.855827 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:02.872137 kernel: BTRFS: device fsid c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (514) Jun 20 19:08:02.873600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (509) Jun 20 19:08:02.891870 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jun 20 19:08:02.900744 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jun 20 19:08:02.908060 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jun 20 19:08:02.909713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jun 20 19:08:02.920355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:08:02.927718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:08:02.946557 disk-uuid[575]: Primary Header is updated. Jun 20 19:08:02.946557 disk-uuid[575]: Secondary Entries is updated. Jun 20 19:08:02.946557 disk-uuid[575]: Secondary Header is updated. 
Jun 20 19:08:02.956581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:03.029745 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jun 20 19:08:03.165736 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jun 20 19:08:03.165802 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jun 20 19:08:03.166035 kernel: usbcore: registered new interface driver usbhid Jun 20 19:08:03.166569 kernel: usbhid: USB HID core driver Jun 20 19:08:03.272615 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jun 20 19:08:03.404601 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jun 20 19:08:03.457589 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jun 20 19:08:03.968462 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:03.968521 disk-uuid[576]: The operation has completed successfully. Jun 20 19:08:04.054814 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:08:04.054922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:08:04.075811 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:08:04.094278 sh[591]: Success Jun 20 19:08:04.114588 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 20 19:08:04.176017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:08:04.195739 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:08:04.198587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 19:08:04.230052 kernel: BTRFS info (device dm-0): first mount of filesystem c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f Jun 20 19:08:04.230105 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:04.230118 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 19:08:04.230139 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 19:08:04.230583 kernel: BTRFS info (device dm-0): using free space tree Jun 20 19:08:04.238573 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 19:08:04.241032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:08:04.243455 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:08:04.249882 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:08:04.252752 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:08:04.270768 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:04.270825 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:04.270839 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:08:04.276740 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:08:04.276801 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:08:04.283568 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:04.296621 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:08:04.305320 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:08:04.393616 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 20 19:08:04.401745 ignition[678]: Ignition 2.20.0 Jun 20 19:08:04.401755 ignition[678]: Stage: fetch-offline Jun 20 19:08:04.403785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:08:04.401796 ignition[678]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:04.405585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:08:04.401805 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:04.401978 ignition[678]: parsed url from cmdline: "" Jun 20 19:08:04.401981 ignition[678]: no config URL provided Jun 20 19:08:04.401986 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:08:04.401994 ignition[678]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:08:04.401999 ignition[678]: failed to fetch config: resource requires networking Jun 20 19:08:04.402205 ignition[678]: Ignition finished successfully Jun 20 19:08:04.437307 systemd-networkd[774]: lo: Link UP Jun 20 19:08:04.437318 systemd-networkd[774]: lo: Gained carrier Jun 20 19:08:04.439124 systemd-networkd[774]: Enumeration completed Jun 20 19:08:04.439726 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:08:04.440097 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:04.440101 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:08:04.440796 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:04.440799 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:08:04.441397 systemd-networkd[774]: eth0: Link UP
Jun 20 19:08:04.441401 systemd-networkd[774]: eth0: Gained carrier
Jun 20 19:08:04.441408 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:08:04.443465 systemd[1]: Reached target network.target - Network.
Jun 20 19:08:04.448080 systemd-networkd[774]: eth1: Link UP
Jun 20 19:08:04.448088 systemd-networkd[774]: eth1: Gained carrier
Jun 20 19:08:04.448105 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:08:04.454770 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:08:04.470143 ignition[778]: Ignition 2.20.0
Jun 20 19:08:04.470154 ignition[778]: Stage: fetch
Jun 20 19:08:04.470487 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:04.470499 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:04.470628 ignition[778]: parsed url from cmdline: ""
Jun 20 19:08:04.472629 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:08:04.470632 ignition[778]: no config URL provided
Jun 20 19:08:04.470637 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:08:04.470646 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:08:04.470732 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jun 20 19:08:04.471629 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jun 20 19:08:04.504639 systemd-networkd[774]: eth0: DHCPv4 address 168.119.177.47/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jun 20 19:08:04.671742 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jun 20 19:08:04.677917 ignition[778]: GET result: OK
Jun 20 19:08:04.678027 ignition[778]: parsing config with SHA512: b4fea3907a726482d04067d2eeec60575f398dbb4ba8628a0001eab81650b2734e4564b1285525693ef17a8a2990874d3f2f3d43da902c6badf2617cbb77a50f
Jun 20 19:08:04.684012 unknown[778]: fetched base config from "system"
Jun 20 19:08:04.684024 unknown[778]: fetched base config from "system"
Jun 20 19:08:04.684565 ignition[778]: fetch: fetch complete
Jun 20 19:08:04.684031 unknown[778]: fetched user config from "hetzner"
Jun 20 19:08:04.684572 ignition[778]: fetch: fetch passed
Jun 20 19:08:04.687167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:08:04.684622 ignition[778]: Ignition finished successfully
Jun 20 19:08:04.695482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:08:04.713755 ignition[785]: Ignition 2.20.0
Jun 20 19:08:04.713768 ignition[785]: Stage: kargs
Jun 20 19:08:04.714075 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:04.714096 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:04.715146 ignition[785]: kargs: kargs passed
Jun 20 19:08:04.715201 ignition[785]: Ignition finished successfully
Jun 20 19:08:04.717014 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:08:04.722763 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:08:04.733440 ignition[792]: Ignition 2.20.0
Jun 20 19:08:04.733454 ignition[792]: Stage: disks
Jun 20 19:08:04.733640 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:04.733651 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:04.734600 ignition[792]: disks: disks passed
Jun 20 19:08:04.736627 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:08:04.734650 ignition[792]: Ignition finished successfully
Jun 20 19:08:04.737587 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:08:04.738565 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:08:04.739620 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:08:04.740925 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:08:04.741952 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:08:04.748815 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:08:04.766362 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jun 20 19:08:04.769694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:08:04.777848 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:08:04.819918 kernel: EXT4-fs (sda9): mounted filesystem f172a629-efc5-4850-a631-f3c62b46134c r/w with ordered data mode. Quota mode: none.
Jun 20 19:08:04.821173 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:08:04.822786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:08:04.834182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:08:04.838744 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:08:04.841827 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 19:08:04.848459 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:08:04.848503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:08:04.853571 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (808)
Jun 20 19:08:04.855172 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941
Jun 20 19:08:04.855212 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jun 20 19:08:04.855232 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:08:04.855570 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:08:04.857773 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:08:04.865963 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:08:04.866015 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:08:04.871670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:08:04.910323 coreos-metadata[810]: Jun 20 19:08:04.910 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jun 20 19:08:04.913230 coreos-metadata[810]: Jun 20 19:08:04.912 INFO Fetch successful
Jun 20 19:08:04.915155 coreos-metadata[810]: Jun 20 19:08:04.915 INFO wrote hostname ci-4230-2-0-2-fda0fd8fee to /sysroot/etc/hostname
Jun 20 19:08:04.915998 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:08:04.918519 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:08:04.926338 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:08:04.932620 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:08:04.938490 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:08:05.040872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:08:05.045744 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:08:05.051625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:08:05.054565 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941
Jun 20 19:08:05.083561 ignition[925]: INFO : Ignition 2.20.0
Jun 20 19:08:05.083561 ignition[925]: INFO : Stage: mount
Jun 20 19:08:05.083561 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:05.083561 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:05.083561 ignition[925]: INFO : mount: mount passed
Jun 20 19:08:05.086253 ignition[925]: INFO : Ignition finished successfully
Jun 20 19:08:05.088613 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:08:05.090413 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:08:05.097647 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:08:05.230253 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:08:05.237824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:08:05.250568 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (936)
Jun 20 19:08:05.250649 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941
Jun 20 19:08:05.250678 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jun 20 19:08:05.251422 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:08:05.255843 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:08:05.255927 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:08:05.259495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:08:05.280300 ignition[952]: INFO : Ignition 2.20.0
Jun 20 19:08:05.280300 ignition[952]: INFO : Stage: files
Jun 20 19:08:05.282244 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:05.282244 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:05.282244 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:08:05.286173 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:08:05.286173 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:08:05.289778 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:08:05.291737 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:08:05.291737 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:08:05.290301 unknown[952]: wrote ssh authorized keys file for user: core
Jun 20 19:08:05.294562 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jun 20 19:08:05.294562 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jun 20 19:08:05.423792 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:08:05.911631 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jun 20 19:08:05.913672 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:08:05.913672 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jun 20 19:08:06.282788 systemd-networkd[774]: eth1: Gained IPv6LL
Jun 20 19:08:06.346796 systemd-networkd[774]: eth0: Gained IPv6LL
Jun 20 19:08:06.496437 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:08:06.609619 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:08:06.609619 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 19:08:06.612929 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jun 20 19:08:07.151880 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:08:07.545951 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 19:08:07.545951 ignition[952]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:08:07.549197 ignition[952]: INFO : files: files passed
Jun 20 19:08:07.549197 ignition[952]: INFO : Ignition finished successfully
Jun 20 19:08:07.552441 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:08:07.561041 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:08:07.564908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:08:07.567892 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:08:07.567994 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:08:07.591097 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:08:07.591097 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:08:07.593967 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:08:07.596529 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:08:07.597577 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:08:07.605854 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:08:07.646796 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:08:07.646987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:08:07.649072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:08:07.650174 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:08:07.651171 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:08:07.652643 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:08:07.671473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:08:07.676771 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:08:07.688715 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:08:07.689792 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:08:07.691930 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:08:07.694556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:08:07.694850 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:08:07.697104 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:08:07.698440 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:08:07.699413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:08:07.701459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:08:07.702445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:08:07.704511 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:08:07.706393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:08:07.708255 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:08:07.709401 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:08:07.710406 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:08:07.711301 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:08:07.711425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:08:07.712695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:08:07.713364 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:08:07.714452 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:08:07.714534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:08:07.715629 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:08:07.715746 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:08:07.717253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:08:07.717379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:08:07.718834 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:08:07.718942 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:08:07.719809 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 19:08:07.719936 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:08:07.726783 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:08:07.730871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:08:07.731653 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:08:07.731839 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:08:07.733084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:08:07.733207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:08:07.738788 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:08:07.747262 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:08:07.749367 ignition[1005]: INFO : Ignition 2.20.0
Jun 20 19:08:07.749367 ignition[1005]: INFO : Stage: umount
Jun 20 19:08:07.749367 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:08:07.749367 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jun 20 19:08:07.749367 ignition[1005]: INFO : umount: umount passed
Jun 20 19:08:07.749367 ignition[1005]: INFO : Ignition finished successfully
Jun 20 19:08:07.754102 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:08:07.754230 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:08:07.755398 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:08:07.755514 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:08:07.760841 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:08:07.760907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:08:07.762457 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:08:07.763444 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:08:07.764141 systemd[1]: Stopped target network.target - Network.
Jun 20 19:08:07.765150 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:08:07.765209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:08:07.766612 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:08:07.767155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:08:07.771676 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:08:07.772376 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:08:07.773573 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:08:07.774487 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:08:07.774530 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:08:07.775945 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:08:07.776006 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:08:07.776735 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:08:07.776786 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:08:07.777641 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:08:07.777682 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:08:07.778668 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:08:07.779503 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:08:07.782064 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:08:07.782611 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:08:07.782705 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:08:07.784180 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:08:07.784275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:08:07.786513 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:08:07.786650 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:08:07.789719 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:08:07.790484 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:08:07.790971 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:08:07.794118 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:08:07.794720 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:08:07.794776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:08:07.806750 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:08:07.807903 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:08:07.808004 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:08:07.812318 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:08:07.812388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:08:07.814067 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:08:07.814128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:08:07.815234 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:08:07.815285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:08:07.817238 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:08:07.822607 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:08:07.822712 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:08:07.830969 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:08:07.831737 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:08:07.839498 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:08:07.840581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:08:07.842985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:08:07.843045 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:08:07.843774 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:08:07.843808 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:08:07.845172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:08:07.845226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:08:07.847335 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:08:07.847392 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:08:07.848940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:08:07.848995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:08:07.855865 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:08:07.856633 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:08:07.856703 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:08:07.862709 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 19:08:07.862780 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:08:07.863486 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:08:07.863533 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:08:07.864273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:08:07.864324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:08:07.866297 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:08:07.866368 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:08:07.866748 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:08:07.867129 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:08:07.873100 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:08:07.879788 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:08:07.889889 systemd[1]: Switching root.
Jun 20 19:08:07.924774 systemd-journald[238]: Journal stopped
Jun 20 19:08:08.922249 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:08:08.922330 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:08:08.922345 kernel: SELinux: policy capability open_perms=1
Jun 20 19:08:08.922357 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:08:08.922374 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:08:08.922386 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:08:08.922397 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:08:08.922687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:08:08.922706 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:08:08.922718 kernel: audit: type=1403 audit(1750446488.086:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:08:08.922732 systemd[1]: Successfully loaded SELinux policy in 35.074ms.
Jun 20 19:08:08.922756 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.733ms.
Jun 20 19:08:08.922771 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:08:08.922785 systemd[1]: Detected virtualization kvm.
Jun 20 19:08:08.922798 systemd[1]: Detected architecture arm64.
Jun 20 19:08:08.922831 systemd[1]: Detected first boot.
Jun 20 19:08:08.922848 systemd[1]: Hostname set to .
Jun 20 19:08:08.922863 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:08:08.922876 zram_generator::config[1049]: No configuration found.
Jun 20 19:08:08.922893 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:08:08.922905 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:08:08.922918 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:08:08.922931 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:08:08.922944 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:08:08.922959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:08:08.922971 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:08:08.922984 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:08:08.922996 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:08:08.923009 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:08:08.923026 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:08:08.923038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:08:08.923051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:08:08.923064 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:08:08.923078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:08:08.923091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:08:08.923104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:08:08.923116 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:08:08.923134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:08:08.923148 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:08:08.923161 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jun 20 19:08:08.923176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:08:08.923189 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:08:08.923201 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:08:08.923214 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:08:08.923227 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:08:08.923240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:08:08.923252 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:08:08.923265 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:08:08.923280 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:08:08.923293 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:08:08.923305 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:08:08.923318 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:08:08.923335 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:08:08.923350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:08:08.923365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:08:08.923378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:08:08.923391 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:08:08.923404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:08:08.923416 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:08:08.923430 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:08:08.923443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:08:08.923456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:08:08.923468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:08:08.923483 systemd[1]: Reached target machines.target - Containers. Jun 20 19:08:08.923496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:08:08.923509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:08.923522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:08:08.923535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:08:08.924506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:08.924532 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:08.924578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:08.924663 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:08:08.924678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:08.924691 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:08:08.924704 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:08:08.924717 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:08:08.924731 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:08:08.924743 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 20 19:08:08.924757 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:08.924772 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:08:08.924786 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:08:08.924799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:08:08.924825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:08:08.924840 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:08:08.924856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:08:08.924869 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:08:08.924882 systemd[1]: Stopped verity-setup.service. Jun 20 19:08:08.924905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:08:08.924921 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:08:08.924934 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:08:08.924950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:08:08.924963 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:08:08.924976 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:08:08.924989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:08.925002 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:08:08.925015 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:08:08.925028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 20 19:08:08.925040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:08.925056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:08.925069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:08.925082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:08.925095 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:08:08.925108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:08:08.925124 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:08:08.925138 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:08:08.925153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:08:08.925168 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:08:08.925214 systemd-journald[1118]: Collecting audit messages is disabled. Jun 20 19:08:08.925242 kernel: fuse: init (API version 7.39) Jun 20 19:08:08.925258 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:08:08.925271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:08.925285 systemd-journald[1118]: Journal started Jun 20 19:08:08.925313 systemd-journald[1118]: Runtime Journal (/run/log/journal/e18d9fc179c141dd855f11cc18e52835) is 8M, max 76.6M, 68.6M free. Jun 20 19:08:08.651390 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:08:08.929667 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 20 19:08:08.929691 kernel: loop: module loaded Jun 20 19:08:08.929704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:08.662737 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 19:08:08.663206 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:08:08.940157 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:08:08.943738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:08:08.955113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:08:08.963997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:08:08.966882 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:08:08.970661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:08:08.975190 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:08:08.975791 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:08:08.984321 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:08.984498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:08.985784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:08:08.987009 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:08:08.988955 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:08:08.992572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:08:09.019922 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jun 20 19:08:09.022348 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:08:09.025682 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:08:09.038677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:08:09.043869 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:08:09.047780 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:08:09.049251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:09.051826 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:08:09.053588 kernel: ACPI: bus type drm_connector registered Jun 20 19:08:09.065783 kernel: loop0: detected capacity change from 0 to 123192 Jun 20 19:08:09.070130 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:09.070313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:09.096291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:09.100642 systemd-journald[1118]: Time spent on flushing to /var/log/journal/e18d9fc179c141dd855f11cc18e52835 is 43.642ms for 1148 entries. Jun 20 19:08:09.100642 systemd-journald[1118]: System Journal (/var/log/journal/e18d9fc179c141dd855f11cc18e52835) is 8M, max 584.8M, 576.8M free. Jun 20 19:08:09.159890 systemd-journald[1118]: Received client request to flush runtime journal. Jun 20 19:08:09.159940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:08:09.159955 kernel: loop1: detected capacity change from 0 to 211168 Jun 20 19:08:09.116329 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:08:09.117376 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. 
Jun 20 19:08:09.117386 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jun 20 19:08:09.135045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:08:09.146864 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:08:09.157632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:09.168727 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 19:08:09.172072 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:08:09.196778 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 19:08:09.204470 kernel: loop2: detected capacity change from 0 to 113512 Jun 20 19:08:09.228721 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:08:09.236168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:08:09.255512 kernel: loop3: detected capacity change from 0 to 8 Jun 20 19:08:09.254419 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jun 20 19:08:09.254442 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jun 20 19:08:09.259717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:09.284589 kernel: loop4: detected capacity change from 0 to 123192 Jun 20 19:08:09.301609 kernel: loop5: detected capacity change from 0 to 211168 Jun 20 19:08:09.333577 kernel: loop6: detected capacity change from 0 to 113512 Jun 20 19:08:09.347590 kernel: loop7: detected capacity change from 0 to 8 Jun 20 19:08:09.350751 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jun 20 19:08:09.352241 (sd-merge)[1199]: Merged extensions into '/usr'. 
Jun 20 19:08:09.358152 systemd[1]: Reload requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:08:09.358280 systemd[1]: Reloading... Jun 20 19:08:09.467469 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:08:09.486584 zram_generator::config[1227]: No configuration found. Jun 20 19:08:09.589791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:09.650726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:08:09.650854 systemd[1]: Reloading finished in 292 ms. Jun 20 19:08:09.669587 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:08:09.670634 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:08:09.681913 systemd[1]: Starting ensure-sysext.service... Jun 20 19:08:09.686758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:08:09.711249 systemd[1]: Reload requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:08:09.711262 systemd[1]: Reloading... Jun 20 19:08:09.726670 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:08:09.726932 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:08:09.727601 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:08:09.727845 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jun 20 19:08:09.727892 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. 
Jun 20 19:08:09.732994 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:09.733007 systemd-tmpfiles[1265]: Skipping /boot Jun 20 19:08:09.746423 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:09.746441 systemd-tmpfiles[1265]: Skipping /boot Jun 20 19:08:09.781569 zram_generator::config[1294]: No configuration found. Jun 20 19:08:09.877271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:09.938256 systemd[1]: Reloading finished in 226 ms. Jun 20 19:08:09.952648 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:08:09.963838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:08:09.976141 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:08:09.980303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:08:09.984639 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:08:09.987471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:08:09.994654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:09.998660 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:08:10.001367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.002858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:10.007789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 20 19:08:10.011357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:10.012729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.012889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.028647 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:08:10.034276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.034484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.034651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.040764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:08:10.042719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:10.043239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:10.044771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:10.045429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:10.049988 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:10.050587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:10.063824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 20 19:08:10.072758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:10.077882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:10.081897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:10.086838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:10.087755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.087892 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.090519 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:08:10.098593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:08:10.102912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:10.104576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:10.106006 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:10.106165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:10.110087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:10.111370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:10.113303 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Jun 20 19:08:10.121729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:10.122051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 20 19:08:10.127436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:10.128007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:10.135733 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:08:10.136446 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:10.136673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:08:10.139567 systemd[1]: Finished ensure-sysext.service. Jun 20 19:08:10.150258 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:08:10.156954 augenrules[1382]: No rules Jun 20 19:08:10.154401 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:08:10.155961 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:08:10.166264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:10.187777 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:08:10.191878 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:08:10.286283 systemd-resolved[1337]: Positive Trust Anchors: Jun 20 19:08:10.288863 systemd-resolved[1337]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:08:10.288973 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:08:10.291457 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:08:10.292285 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:08:10.296008 systemd-resolved[1337]: Using system hostname 'ci-4230-2-0-2-fda0fd8fee'. Jun 20 19:08:10.298007 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:08:10.299096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:10.343443 systemd-networkd[1398]: lo: Link UP Jun 20 19:08:10.343451 systemd-networkd[1398]: lo: Gained carrier Jun 20 19:08:10.345318 systemd-networkd[1398]: Enumeration completed Jun 20 19:08:10.345441 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:08:10.346708 systemd[1]: Reached target network.target - Network. Jun 20 19:08:10.356969 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:08:10.362715 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:08:10.383960 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jun 20 19:08:10.385293 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 20 19:08:10.410577 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:08:10.422566 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1406) Jun 20 19:08:10.448220 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:10.448233 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:08:10.449376 systemd-networkd[1398]: eth1: Link UP Jun 20 19:08:10.449384 systemd-networkd[1398]: eth1: Gained carrier Jun 20 19:08:10.449404 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:10.459892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:08:10.466137 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:08:10.480715 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:08:10.484294 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jun 20 19:08:10.498310 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:08:10.507123 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:10.507136 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:08:10.507867 systemd-networkd[1398]: eth0: Link UP Jun 20 19:08:10.507875 systemd-networkd[1398]: eth0: Gained carrier Jun 20 19:08:10.507892 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:10.507913 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jun 20 19:08:10.512011 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jun 20 19:08:10.519640 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jun 20 19:08:10.519702 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 19:08:10.519714 kernel: [drm] features: -context_init Jun 20 19:08:10.520569 kernel: [drm] number of scanouts: 1 Jun 20 19:08:10.520611 kernel: [drm] number of cap sets: 0 Jun 20 19:08:10.521992 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jun 20 19:08:10.528174 kernel: Console: switching to colour frame buffer device 160x50 Jun 20 19:08:10.529157 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jun 20 19:08:10.529403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.537268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:10.547056 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 19:08:10.552660 systemd-networkd[1398]: eth0: DHCPv4 address 168.119.177.47/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 19:08:10.553093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:10.553185 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. 
Jun 20 19:08:10.558555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:10.559194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.559230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.559252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:10.559639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:10.560700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:10.568184 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:10.573003 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:10.577091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:10.578303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:10.579041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:10.580915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:10.604700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:10.670527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:10.739478 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jun 20 19:08:10.746805 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 19:08:10.758429 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:10.790986 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 19:08:10.793046 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:10.794664 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:08:10.795956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:08:10.796693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:08:10.797582 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:08:10.798283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:08:10.799014 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:08:10.799675 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:08:10.799705 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:08:10.800193 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:08:10.801420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:08:10.803599 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:08:10.806734 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:08:10.807608 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:08:10.808357 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jun 20 19:08:10.811239 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:08:10.812392 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:08:10.814614 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:08:10.816111 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:08:10.816970 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:08:10.817594 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:08:10.818653 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:10.818689 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:10.821704 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:08:10.824428 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:10.826862 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:08:10.830749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:08:10.836083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:08:10.842903 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:08:10.843460 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:08:10.852742 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:08:10.856903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:08:10.859884 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jun 20 19:08:10.863848 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jun 20 19:08:10.866632 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:08:10.877533 jq[1464]: false
Jun 20 19:08:10.888817 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:08:10.882603 dbus-daemon[1463]: [system] SELinux support is enabled
Jun 20 19:08:10.891323 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:08:10.891923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:08:10.892649 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:08:10.896015 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:08:10.897374 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:08:10.902191 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 20 19:08:10.912168 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:08:10.912688 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:08:10.923066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:08:10.923315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:08:10.929431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:08:10.929464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:08:10.931371 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:08:10.931393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:08:10.942123 jq[1478]: true
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found loop4
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found loop5
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found loop6
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found loop7
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda1
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda2
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda3
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found usr
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda4
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda6
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda7
Jun 20 19:08:10.966472 extend-filesystems[1465]: Found sda9
Jun 20 19:08:10.964803 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:08:11.003911 coreos-metadata[1462]: Jun 20 19:08:10.989 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jun 20 19:08:11.003911 coreos-metadata[1462]: Jun 20 19:08:10.994 INFO Fetch successful
Jun 20 19:08:11.003911 coreos-metadata[1462]: Jun 20 19:08:10.997 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jun 20 19:08:11.003911 coreos-metadata[1462]: Jun 20 19:08:11.000 INFO Fetch successful
Jun 20 19:08:11.004149 update_engine[1477]: I20250620 19:08:10.971400 1477 main.cc:92] Flatcar Update Engine starting
Jun 20 19:08:11.004149 update_engine[1477]: I20250620 19:08:10.983770 1477 update_check_scheduler.cc:74] Next update check in 7m13s
Jun 20 19:08:11.004300 jq[1496]: true
Jun 20 19:08:11.004391 extend-filesystems[1465]: Checking size of /dev/sda9
Jun 20 19:08:10.965039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:08:11.010995 tar[1491]: linux-arm64/LICENSE
Jun 20 19:08:11.010995 tar[1491]: linux-arm64/helm
Jun 20 19:08:10.978022 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:08:10.981242 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:08:10.996098 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:08:11.019080 extend-filesystems[1465]: Resized partition /dev/sda9
Jun 20 19:08:11.032593 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024)
Jun 20 19:08:11.057637 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jun 20 19:08:11.097236 systemd-logind[1473]: New seat seat0.
Jun 20 19:08:11.116709 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (Power Button)
Jun 20 19:08:11.116793 systemd-logind[1473]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jun 20 19:08:11.117648 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:08:11.126568 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1394)
Jun 20 19:08:11.152652 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:08:11.161624 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:08:11.165586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:08:11.170268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:08:11.181951 systemd[1]: Starting sshkeys.service...
Jun 20 19:08:11.218286 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 19:08:11.229931 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 19:08:11.253575 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jun 20 19:08:11.280631 coreos-metadata[1540]: Jun 20 19:08:11.259 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jun 20 19:08:11.280631 coreos-metadata[1540]: Jun 20 19:08:11.263 INFO Fetch successful
Jun 20 19:08:11.286230 extend-filesystems[1510]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jun 20 19:08:11.286230 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 5
Jun 20 19:08:11.286230 extend-filesystems[1510]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jun 20 19:08:11.300658 extend-filesystems[1465]: Resized filesystem in /dev/sda9
Jun 20 19:08:11.300658 extend-filesystems[1465]: Found sr0
Jun 20 19:08:11.286842 unknown[1540]: wrote ssh authorized keys file for user: core
Jun 20 19:08:11.288827 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:08:11.289036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:08:11.331843 update-ssh-keys[1544]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:08:11.332650 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 20 19:08:11.340071 systemd[1]: Finished sshkeys.service.
Jun 20 19:08:11.385739 containerd[1493]: time="2025-06-20T19:08:11.384086000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jun 20 19:08:11.442629 containerd[1493]: time="2025-06-20T19:08:11.442577160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.445961360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446000560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446018240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446181800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446199000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446264640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446277200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446479720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446494640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446506760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 19:08:11.446859 containerd[1493]: time="2025-06-20T19:08:11.446516640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.447182 containerd[1493]: time="2025-06-20T19:08:11.446613160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.448435 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:08:11.448846 containerd[1493]: time="2025-06-20T19:08:11.448759760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 20 19:08:11.449004 containerd[1493]: time="2025-06-20T19:08:11.448979120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 19:08:11.449004 containerd[1493]: time="2025-06-20T19:08:11.449001360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 20 19:08:11.449195 containerd[1493]: time="2025-06-20T19:08:11.449094920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 20 19:08:11.449195 containerd[1493]: time="2025-06-20T19:08:11.449145880Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:08:11.454003 containerd[1493]: time="2025-06-20T19:08:11.453967280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 20 19:08:11.454151 containerd[1493]: time="2025-06-20T19:08:11.454029840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 20 19:08:11.454151 containerd[1493]: time="2025-06-20T19:08:11.454047080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 20 19:08:11.454151 containerd[1493]: time="2025-06-20T19:08:11.454063360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 20 19:08:11.454151 containerd[1493]: time="2025-06-20T19:08:11.454077320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 20 19:08:11.454349 containerd[1493]: time="2025-06-20T19:08:11.454233520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454491040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454618520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454645280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454660320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454675680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454689880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454704120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454718120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454734560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454747840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454761200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454814680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454843960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455569 containerd[1493]: time="2025-06-20T19:08:11.454859440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454873520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454888320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454906120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454922000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454933560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454949720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454963600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454977760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.454990480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455002200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455013720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455029720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455053720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455068480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.455870 containerd[1493]: time="2025-06-20T19:08:11.455080680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455264840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455283920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455294240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455305800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455314520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455328600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455339000Z" level=info msg="NRI interface is disabled by configuration."
Jun 20 19:08:11.456104 containerd[1493]: time="2025-06-20T19:08:11.455348920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 20 19:08:11.459567 containerd[1493]: time="2025-06-20T19:08:11.458032720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 20 19:08:11.459567 containerd[1493]: time="2025-06-20T19:08:11.458390080Z" level=info msg="Connect containerd service"
Jun 20 19:08:11.459567 containerd[1493]: time="2025-06-20T19:08:11.458459920Z" level=info msg="using legacy CRI server"
Jun 20 19:08:11.459567 containerd[1493]: time="2025-06-20T19:08:11.458471000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:08:11.459567 containerd[1493]: time="2025-06-20T19:08:11.459345520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 20 19:08:11.463310 containerd[1493]: time="2025-06-20T19:08:11.463278240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:08:11.465030 containerd[1493]: time="2025-06-20T19:08:11.464988200Z" level=info msg="Start subscribing containerd event"
Jun 20 19:08:11.465134 containerd[1493]: time="2025-06-20T19:08:11.465112440Z" level=info msg="Start recovering state"
Jun 20 19:08:11.465213 containerd[1493]: time="2025-06-20T19:08:11.465186960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:08:11.465259 containerd[1493]: time="2025-06-20T19:08:11.465238440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:08:11.465326 containerd[1493]: time="2025-06-20T19:08:11.465311680Z" level=info msg="Start event monitor"
Jun 20 19:08:11.465376 containerd[1493]: time="2025-06-20T19:08:11.465364600Z" level=info msg="Start snapshots syncer"
Jun 20 19:08:11.465430 containerd[1493]: time="2025-06-20T19:08:11.465418720Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:08:11.465478 containerd[1493]: time="2025-06-20T19:08:11.465465000Z" level=info msg="Start streaming server"
Jun 20 19:08:11.466768 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:08:11.468436 containerd[1493]: time="2025-06-20T19:08:11.467413160Z" level=info msg="containerd successfully booted in 0.084549s"
Jun 20 19:08:11.699200 tar[1491]: linux-arm64/README.md
Jun 20 19:08:11.713596 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:08:11.786761 systemd-networkd[1398]: eth1: Gained IPv6LL
Jun 20 19:08:11.787393 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Jun 20 19:08:11.792469 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:08:11.793996 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:08:11.803714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:08:11.807853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:08:11.845727 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:08:12.235447 systemd-networkd[1398]: eth0: Gained IPv6LL
Jun 20 19:08:12.236019 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Jun 20 19:08:12.661924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:08:12.663228 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:08:13.217096 kubelet[1576]: E0620 19:08:13.217032 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:08:13.221665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:08:13.222003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:08:13.222286 systemd[1]: kubelet.service: Consumed 946ms CPU time, 261.5M memory peak.
Jun 20 19:08:13.600125 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:08:13.621283 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:08:13.628974 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:08:13.651779 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:08:13.652078 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:08:13.660037 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:08:13.672658 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:08:13.680186 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:08:13.683813 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jun 20 19:08:13.684692 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:08:13.685443 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:08:13.688893 systemd[1]: Startup finished in 775ms (kernel) + 6.402s (initrd) + 5.636s (userspace) = 12.814s.
Jun 20 19:08:23.472814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:08:23.486875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:08:23.599609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:08:23.604933 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:08:23.656606 kubelet[1612]: E0620 19:08:23.656488 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:08:23.660236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:08:23.660450 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:08:23.660985 systemd[1]: kubelet.service: Consumed 157ms CPU time, 105.4M memory peak.
Jun 20 19:08:33.911618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:08:33.918853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:08:34.031730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:08:34.044173 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:08:34.098636 kubelet[1627]: E0620 19:08:34.098571 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:08:34.103014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:08:34.103698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:08:34.104173 systemd[1]: kubelet.service: Consumed 157ms CPU time, 105.2M memory peak.
Jun 20 19:08:42.690405 systemd-timesyncd[1383]: Contacted time server 178.63.67.56:123 (2.flatcar.pool.ntp.org).
Jun 20 19:08:42.690487 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-06-20 19:08:43.038293 UTC.
Jun 20 19:08:44.355517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:08:44.364964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:08:44.476124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:08:44.485274 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:08:44.537527 kubelet[1642]: E0620 19:08:44.537382 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:08:44.543256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:08:44.543633 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:08:44.545764 systemd[1]: kubelet.service: Consumed 149ms CPU time, 105.1M memory peak.
Jun 20 19:08:54.793708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 19:08:54.798816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:08:54.928228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:08:54.932856 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:08:54.981324 kubelet[1658]: E0620 19:08:54.981260 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:08:54.984200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:08:54.984367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:08:54.985022 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.6M memory peak.
Jun 20 19:08:55.920109 update_engine[1477]: I20250620 19:08:55.919960 1477 update_attempter.cc:509] Updating boot flags...
Jun 20 19:08:55.967614 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1674)
Jun 20 19:09:05.151752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jun 20 19:09:05.166700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:05.289312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:05.299082 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:05.342748 kubelet[1688]: E0620 19:09:05.342652 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:05.347523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:05.347916 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:05.348351 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107M memory peak.
Jun 20 19:09:15.401614 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jun 20 19:09:15.415596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:15.528832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:15.534981 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:15.583162 kubelet[1703]: E0620 19:09:15.583103 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:15.586171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:15.586343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:15.586891 systemd[1]: kubelet.service: Consumed 151ms CPU time, 107M memory peak.
Jun 20 19:09:25.651759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jun 20 19:09:25.664009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:25.774419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:25.778421 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:25.822634 kubelet[1717]: E0620 19:09:25.822589 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:25.825646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:25.826026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:25.826416 systemd[1]: kubelet.service: Consumed 146ms CPU time, 105.1M memory peak.
Jun 20 19:09:35.901777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jun 20 19:09:35.918579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:36.044794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:36.047138 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:36.092904 kubelet[1734]: E0620 19:09:36.092807 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:36.098049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:36.098332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:36.099014 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.6M memory peak.
Jun 20 19:09:46.151261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jun 20 19:09:46.163861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:46.292721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:46.306446 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:46.354671 kubelet[1748]: E0620 19:09:46.354601 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:46.357855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:46.358077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:46.358536 systemd[1]: kubelet.service: Consumed 153ms CPU time, 104.6M memory peak.
Jun 20 19:09:56.401645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jun 20 19:09:56.410922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:09:56.542530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:09:56.547104 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:09:56.585802 kubelet[1765]: E0620 19:09:56.585741 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:09:56.589094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:09:56.589657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:09:56.591749 systemd[1]: kubelet.service: Consumed 149ms CPU time, 105M memory peak.
Jun 20 19:09:56.846955 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:09:56.851890 systemd[1]: Started sshd@0-168.119.177.47:22-147.75.109.163:42340.service - OpenSSH per-connection server daemon (147.75.109.163:42340).
Jun 20 19:09:57.870956 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 42340 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:09:57.874785 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:57.890015 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:09:57.901280 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:09:57.904450 systemd-logind[1473]: New session 1 of user core.
Jun 20 19:09:57.914652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:09:57.923303 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:09:57.927213 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:09:57.930490 systemd-logind[1473]: New session c1 of user core.
Jun 20 19:09:58.063895 systemd[1777]: Queued start job for default target default.target.
Jun 20 19:09:58.076291 systemd[1777]: Created slice app.slice - User Application Slice.
Jun 20 19:09:58.076347 systemd[1777]: Reached target paths.target - Paths.
Jun 20 19:09:58.076430 systemd[1777]: Reached target timers.target - Timers.
Jun 20 19:09:58.078642 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:09:58.092076 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:09:58.092271 systemd[1777]: Reached target sockets.target - Sockets.
Jun 20 19:09:58.092359 systemd[1777]: Reached target basic.target - Basic System.
Jun 20 19:09:58.092464 systemd[1777]: Reached target default.target - Main User Target.
Jun 20 19:09:58.092517 systemd[1777]: Startup finished in 154ms.
Jun 20 19:09:58.092936 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:09:58.101871 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:09:58.810422 systemd[1]: Started sshd@1-168.119.177.47:22-147.75.109.163:42342.service - OpenSSH per-connection server daemon (147.75.109.163:42342).
Jun 20 19:09:59.824993 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 42342 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:09:59.827337 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:59.833027 systemd-logind[1473]: New session 2 of user core.
Jun 20 19:09:59.840811 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:10:00.522300 sshd[1790]: Connection closed by 147.75.109.163 port 42342
Jun 20 19:10:00.523090 sshd-session[1788]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:00.529284 systemd[1]: sshd@1-168.119.177.47:22-147.75.109.163:42342.service: Deactivated successfully.
Jun 20 19:10:00.531725 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 19:10:00.532597 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit.
Jun 20 19:10:00.534045 systemd-logind[1473]: Removed session 2.
Jun 20 19:10:00.709053 systemd[1]: Started sshd@2-168.119.177.47:22-147.75.109.163:42348.service - OpenSSH per-connection server daemon (147.75.109.163:42348).
Jun 20 19:10:01.718176 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 42348 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:10:01.720323 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:01.726861 systemd-logind[1473]: New session 3 of user core.
Jun 20 19:10:01.736870 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:10:02.410588 sshd[1798]: Connection closed by 147.75.109.163 port 42348
Jun 20 19:10:02.410441 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:02.414958 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit.
Jun 20 19:10:02.416321 systemd[1]: sshd@2-168.119.177.47:22-147.75.109.163:42348.service: Deactivated successfully.
Jun 20 19:10:02.418594 systemd[1]: session-3.scope: Deactivated successfully.
Jun 20 19:10:02.419890 systemd-logind[1473]: Removed session 3.
Jun 20 19:10:02.590059 systemd[1]: Started sshd@3-168.119.177.47:22-147.75.109.163:42354.service - OpenSSH per-connection server daemon (147.75.109.163:42354).
Jun 20 19:10:03.587780 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 42354 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:10:03.590163 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:03.595888 systemd-logind[1473]: New session 4 of user core.
Jun 20 19:10:03.605873 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:10:04.277004 sshd[1806]: Connection closed by 147.75.109.163 port 42354
Jun 20 19:10:04.278087 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:04.283168 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:10:04.283912 systemd[1]: sshd@3-168.119.177.47:22-147.75.109.163:42354.service: Deactivated successfully.
Jun 20 19:10:04.286155 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:10:04.287500 systemd-logind[1473]: Removed session 4.
Jun 20 19:10:04.459052 systemd[1]: Started sshd@4-168.119.177.47:22-147.75.109.163:42360.service - OpenSSH per-connection server daemon (147.75.109.163:42360).
Jun 20 19:10:05.462349 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 42360 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:10:05.464424 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:05.470845 systemd-logind[1473]: New session 5 of user core.
Jun 20 19:10:05.476860 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:10:06.003366 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:10:06.003757 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:10:06.020583 sudo[1815]: pam_unix(sudo:session): session closed for user root
Jun 20 19:10:06.184516 sshd[1814]: Connection closed by 147.75.109.163 port 42360
Jun 20 19:10:06.183400 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:06.188328 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:10:06.189162 systemd[1]: sshd@4-168.119.177.47:22-147.75.109.163:42360.service: Deactivated successfully.
Jun 20 19:10:06.191846 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:10:06.193151 systemd-logind[1473]: Removed session 5.
Jun 20 19:10:06.361719 systemd[1]: Started sshd@5-168.119.177.47:22-147.75.109.163:37050.service - OpenSSH per-connection server daemon (147.75.109.163:37050).
Jun 20 19:10:06.651466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jun 20 19:10:06.659893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:06.783116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:06.794447 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:10:06.844750 kubelet[1831]: E0620 19:10:06.844671 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:10:06.847911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:10:06.848129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:10:06.848696 systemd[1]: kubelet.service: Consumed 154ms CPU time, 104.9M memory peak.
Jun 20 19:10:07.352443 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 37050 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:10:07.354919 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:07.361027 systemd-logind[1473]: New session 6 of user core.
Jun 20 19:10:07.374406 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:10:07.878175 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:10:07.879151 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:10:07.884133 sudo[1840]: pam_unix(sudo:session): session closed for user root
Jun 20 19:10:07.893832 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:10:07.894416 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:10:07.917232 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:10:07.956593 augenrules[1862]: No rules
Jun 20 19:10:07.958459 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:10:07.959643 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:10:07.961847 sudo[1839]: pam_unix(sudo:session): session closed for user root
Jun 20 19:10:08.122465 sshd[1838]: Connection closed by 147.75.109.163 port 37050
Jun 20 19:10:08.123438 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:08.130016 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:10:08.131490 systemd[1]: sshd@5-168.119.177.47:22-147.75.109.163:37050.service: Deactivated successfully.
Jun 20 19:10:08.133532 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:10:08.136322 systemd-logind[1473]: Removed session 6.
Jun 20 19:10:08.296991 systemd[1]: Started sshd@6-168.119.177.47:22-147.75.109.163:37066.service - OpenSSH per-connection server daemon (147.75.109.163:37066).
Jun 20 19:10:09.293685 sshd[1871]: Accepted publickey for core from 147.75.109.163 port 37066 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:10:09.295426 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:09.301699 systemd-logind[1473]: New session 7 of user core.
Jun 20 19:10:09.310832 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:10:09.817276 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:10:09.817590 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:10:10.152168 (dockerd)[1891]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:10:10.152170 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:10:10.392273 dockerd[1891]: time="2025-06-20T19:10:10.392189118Z" level=info msg="Starting up"
Jun 20 19:10:10.499302 dockerd[1891]: time="2025-06-20T19:10:10.499137951Z" level=info msg="Loading containers: start."
Jun 20 19:10:10.657653 kernel: Initializing XFRM netlink socket
Jun 20 19:10:10.746533 systemd-networkd[1398]: docker0: Link UP
Jun 20 19:10:10.769672 dockerd[1891]: time="2025-06-20T19:10:10.768910247Z" level=info msg="Loading containers: done."
Jun 20 19:10:10.786692 dockerd[1891]: time="2025-06-20T19:10:10.786628425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:10:10.786861 dockerd[1891]: time="2025-06-20T19:10:10.786769963Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jun 20 19:10:10.787024 dockerd[1891]: time="2025-06-20T19:10:10.786988070Z" level=info msg="Daemon has completed initialization"
Jun 20 19:10:10.826872 dockerd[1891]: time="2025-06-20T19:10:10.826074636Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:10:10.826173 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:10:11.619524 containerd[1493]: time="2025-06-20T19:10:11.619237409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 20 19:10:12.263254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022713384.mount: Deactivated successfully.
Jun 20 19:10:13.868561 containerd[1493]: time="2025-06-20T19:10:13.868276974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:13.870209 containerd[1493]: time="2025-06-20T19:10:13.870128114Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351808"
Jun 20 19:10:13.871457 containerd[1493]: time="2025-06-20T19:10:13.871377622Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:13.874421 containerd[1493]: time="2025-06-20T19:10:13.874360337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:13.875883 containerd[1493]: time="2025-06-20T19:10:13.875526996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.256239301s"
Jun 20 19:10:13.875883 containerd[1493]: time="2025-06-20T19:10:13.875593844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jun 20 19:10:13.878235 containerd[1493]: time="2025-06-20T19:10:13.878093501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 20 19:10:15.548105 containerd[1493]: time="2025-06-20T19:10:15.547899365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:15.550723 containerd[1493]: time="2025-06-20T19:10:15.550629123Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537643"
Jun 20 19:10:15.551792 containerd[1493]: time="2025-06-20T19:10:15.551681086Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:15.560171 containerd[1493]: time="2025-06-20T19:10:15.560029859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:15.563736 containerd[1493]: time="2025-06-20T19:10:15.563481862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.685314673s"
Jun 20 19:10:15.563736 containerd[1493]: time="2025-06-20T19:10:15.563658963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jun 20 19:10:15.565942 containerd[1493]: time="2025-06-20T19:10:15.565717083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 19:10:16.901140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jun 20 19:10:16.909672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:17.033291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:17.045930 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:10:17.103952 kubelet[2148]: E0620 19:10:17.103637 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:10:17.106329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:10:17.106477 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:10:17.106870 systemd[1]: kubelet.service: Consumed 150ms CPU time, 106.7M memory peak.
Jun 20 19:10:17.205000 containerd[1493]: time="2025-06-20T19:10:17.204676368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:17.206989 containerd[1493]: time="2025-06-20T19:10:17.206876140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293535"
Jun 20 19:10:17.208090 containerd[1493]: time="2025-06-20T19:10:17.207632026Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:17.212052 containerd[1493]: time="2025-06-20T19:10:17.211964003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:17.214113 containerd[1493]: time="2025-06-20T19:10:17.213904065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.648142978s"
Jun 20 19:10:17.214113 containerd[1493]: time="2025-06-20T19:10:17.213956671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jun 20 19:10:17.215512 containerd[1493]: time="2025-06-20T19:10:17.215473685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 19:10:18.300134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540754270.mount: Deactivated successfully.
Jun 20 19:10:18.718404 containerd[1493]: time="2025-06-20T19:10:18.718274805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:18.719982 containerd[1493]: time="2025-06-20T19:10:18.719873026Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199498"
Jun 20 19:10:18.721006 containerd[1493]: time="2025-06-20T19:10:18.720920746Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:18.724287 containerd[1493]: time="2025-06-20T19:10:18.724176716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:18.725315 containerd[1493]: time="2025-06-20T19:10:18.725155267Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.50853393s"
Jun 20 19:10:18.725315 containerd[1493]: time="2025-06-20T19:10:18.725204433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jun 20 19:10:18.726091 containerd[1493]: time="2025-06-20T19:10:18.725897271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 19:10:19.282669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701030899.mount: Deactivated successfully.
Jun 20 19:10:20.378964 containerd[1493]: time="2025-06-20T19:10:20.378847583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.380651 containerd[1493]: time="2025-06-20T19:10:20.380578780Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Jun 20 19:10:20.382509 containerd[1493]: time="2025-06-20T19:10:20.382445284Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.386559 containerd[1493]: time="2025-06-20T19:10:20.386470185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.389606 containerd[1493]: time="2025-06-20T19:10:20.388935633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.662994s"
Jun 20 19:10:20.389606 containerd[1493]: time="2025-06-20T19:10:20.388993788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jun 20 19:10:20.390037 containerd[1493]: time="2025-06-20T19:10:20.390004013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:10:20.962629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197263024.mount: Deactivated successfully.
Jun 20 19:10:20.970445 containerd[1493]: time="2025-06-20T19:10:20.970074810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.971488 containerd[1493]: time="2025-06-20T19:10:20.971415564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jun 20 19:10:20.973069 containerd[1493]: time="2025-06-20T19:10:20.972990776Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.976138 containerd[1493]: time="2025-06-20T19:10:20.976093164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:20.978599 containerd[1493]: time="2025-06-20T19:10:20.977189461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 587.055901ms"
Jun 20 19:10:20.978599 containerd[1493]: time="2025-06-20T19:10:20.977228697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 20 19:10:20.979222 containerd[1493]: time="2025-06-20T19:10:20.979171514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 19:10:21.526978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971713372.mount: Deactivated successfully.
Jun 20 19:10:23.770418 containerd[1493]: time="2025-06-20T19:10:23.769097298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:23.771016 containerd[1493]: time="2025-06-20T19:10:23.770833362Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334637"
Jun 20 19:10:23.772176 containerd[1493]: time="2025-06-20T19:10:23.772127861Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:23.777345 containerd[1493]: time="2025-06-20T19:10:23.777293855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:23.779413 containerd[1493]: time="2025-06-20T19:10:23.779365453Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.800139664s"
Jun 20 19:10:23.779652 containerd[1493]: time="2025-06-20T19:10:23.779622593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jun 20 19:10:27.151814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jun 20 19:10:27.164817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:27.313753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:27.314607 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:10:27.392551 kubelet[2304]: E0620 19:10:27.392446 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:10:27.398343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:10:27.398599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:10:27.400656 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.4M memory peak.
Jun 20 19:10:29.348100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:29.348704 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.4M memory peak.
Jun 20 19:10:29.358152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:29.394940 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit session-7.scope)...
Jun 20 19:10:29.394956 systemd[1]: Reloading...
Jun 20 19:10:29.518578 zram_generator::config[2366]: No configuration found.
Jun 20 19:10:29.625401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:10:29.717845 systemd[1]: Reloading finished in 322 ms.
Jun 20 19:10:29.780666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:29.787736 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:29.788604 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:10:29.788986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:29.789033 systemd[1]: kubelet.service: Consumed 108ms CPU time, 94.9M memory peak.
Jun 20 19:10:29.795079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:10:29.936450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:29.954297 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:10:30.003324 kubelet[2414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:10:30.003682 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:10:30.003727 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:10:30.003874 kubelet[2414]: I0620 19:10:30.003841 2414 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:10:31.451584 kubelet[2414]: I0620 19:10:31.451461 2414 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 19:10:31.451584 kubelet[2414]: I0620 19:10:31.451500 2414 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:10:31.452483 kubelet[2414]: I0620 19:10:31.452063 2414 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 19:10:31.491591 kubelet[2414]: E0620 19:10:31.491167 2414 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.177.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 19:10:31.492941 kubelet[2414]: I0620 19:10:31.492763 2414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:10:31.506363 kubelet[2414]: E0620 19:10:31.506275 2414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 19:10:31.506363 kubelet[2414]: I0620 19:10:31.506327 2414 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 19:10:31.509180 kubelet[2414]: I0620 19:10:31.509126 2414 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:10:31.510788 kubelet[2414]: I0620 19:10:31.510725 2414 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:10:31.510976 kubelet[2414]: I0620 19:10:31.510781 2414 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-2-fda0fd8fee","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:10:31.511116 kubelet[2414]: I0620 19:10:31.511044 2414 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:10:31.511116 kubelet[2414]: I0620 19:10:31.511057 2414 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 19:10:31.511312 kubelet[2414]: I0620 19:10:31.511273 2414 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:10:31.516260 kubelet[2414]: I0620 19:10:31.516050 2414 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 19:10:31.516260 kubelet[2414]: I0620 19:10:31.516094 2414 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:10:31.516260 kubelet[2414]: I0620 19:10:31.516124 2414 kubelet.go:386] "Adding apiserver pod source"
Jun 20 19:10:31.516260 kubelet[2414]: I0620 19:10:31.516141 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:10:31.519994 kubelet[2414]: E0620 19:10:31.519963 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.177.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-2-fda0fd8fee&limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 19:10:31.520631 kubelet[2414]: E0620 19:10:31.520603 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.177.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 19:10:31.520907 kubelet[2414]: I0620 19:10:31.520890 2414 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 19:10:31.521761 kubelet[2414]: I0620 19:10:31.521742 2414 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 19:10:31.521969 kubelet[2414]: W0620 19:10:31.521956 2414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:10:31.526156 kubelet[2414]: I0620 19:10:31.525904 2414 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:10:31.526156 kubelet[2414]: I0620 19:10:31.525960 2414 server.go:1289] "Started kubelet"
Jun 20 19:10:31.530776 kubelet[2414]: I0620 19:10:31.530739 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:10:31.532325 kubelet[2414]: E0620 19:10:31.530310 2414 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.177.47:6443/api/v1/namespaces/default/events\": dial tcp 168.119.177.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-2-fda0fd8fee.184ad5efe661be7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-2-fda0fd8fee,UID:ci-4230-2-0-2-fda0fd8fee,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-2-fda0fd8fee,},FirstTimestamp:2025-06-20 19:10:31.525924477 +0000 UTC m=+1.565625060,LastTimestamp:2025-06-20 19:10:31.525924477 +0000 UTC m=+1.565625060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-2-fda0fd8fee,}"
Jun 20 19:10:31.539796 kubelet[2414]: I0620 19:10:31.538413 2414 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:10:31.539796 kubelet[2414]: I0620 19:10:31.539650 2414 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 19:10:31.543455 kubelet[2414]: I0620 19:10:31.543375 2414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:10:31.543735 kubelet[2414]: I0620 19:10:31.543706 2414 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:10:31.544610 kubelet[2414]: I0620 19:10:31.544146 2414 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:10:31.544610 kubelet[2414]: E0620 19:10:31.544374 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found"
Jun 20 19:10:31.544710 kubelet[2414]: I0620 19:10:31.543887 2414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:10:31.546077 kubelet[2414]: I0620 19:10:31.546046 2414 factory.go:223] Registration of the systemd container factory successfully
Jun 20 19:10:31.546179 kubelet[2414]: I0620 19:10:31.546157 2414 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:10:31.546858 kubelet[2414]: E0620 19:10:31.546818 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.177.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-2-fda0fd8fee?timeout=10s\": dial tcp 168.119.177.47:6443: connect: connection refused" interval="200ms"
Jun 20 19:10:31.547118 kubelet[2414]: E0620 19:10:31.547086 2414 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:10:31.548618 kubelet[2414]: I0620 19:10:31.548593 2414 factory.go:223] Registration of the containerd container factory successfully
Jun 20 19:10:31.548919 kubelet[2414]: I0620 19:10:31.548605 2414 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:10:31.548919 kubelet[2414]: I0620 19:10:31.548763 2414 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:10:31.563830 kubelet[2414]: I0620 19:10:31.563755 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:10:31.565362 kubelet[2414]: I0620 19:10:31.565265 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:10:31.565362 kubelet[2414]: I0620 19:10:31.565349 2414 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 19:10:31.565477 kubelet[2414]: I0620 19:10:31.565370 2414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:10:31.565477 kubelet[2414]: I0620 19:10:31.565378 2414 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 19:10:31.565477 kubelet[2414]: E0620 19:10:31.565420 2414 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:10:31.570621 kubelet[2414]: E0620 19:10:31.570574 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.177.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 19:10:31.574521 kubelet[2414]: E0620 19:10:31.574192 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.177.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 19:10:31.581187 kubelet[2414]: I0620 19:10:31.581138 2414 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:10:31.581187 kubelet[2414]: I0620 19:10:31.581177 2414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:10:31.581312 kubelet[2414]: I0620 19:10:31.581210 2414 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:10:31.584026 kubelet[2414]: I0620 19:10:31.583969 2414 policy_none.go:49] "None policy: Start"
Jun 20 19:10:31.584026 kubelet[2414]: I0620 19:10:31.583999 2414 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:10:31.584026 kubelet[2414]: I0620 19:10:31.584024 2414 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:10:31.591587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:10:31.610509 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:10:31.623743 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:10:31.626211 kubelet[2414]: E0620 19:10:31.625636 2414 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 19:10:31.626211 kubelet[2414]: I0620 19:10:31.625939 2414 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:10:31.626211 kubelet[2414]: I0620 19:10:31.625959 2414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:10:31.627761 kubelet[2414]: I0620 19:10:31.627433 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:10:31.628956 kubelet[2414]: E0620 19:10:31.628934 2414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:10:31.629245 kubelet[2414]: E0620 19:10:31.629195 2414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-2-fda0fd8fee\" not found"
Jun 20 19:10:31.683474 systemd[1]: Created slice kubepods-burstable-pode561535f9f940b17ad94415027903bc7.slice - libcontainer container kubepods-burstable-pode561535f9f940b17ad94415027903bc7.slice.
Jun 20 19:10:31.692848 kubelet[2414]: E0620 19:10:31.692805 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.701253 systemd[1]: Created slice kubepods-burstable-podef3303e30b6b25231fb1e6d2d54e3f61.slice - libcontainer container kubepods-burstable-podef3303e30b6b25231fb1e6d2d54e3f61.slice.
Jun 20 19:10:31.715202 kubelet[2414]: E0620 19:10:31.714113 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.719801 systemd[1]: Created slice kubepods-burstable-pod5327dda4a910b0722f4c63d9901b16e1.slice - libcontainer container kubepods-burstable-pod5327dda4a910b0722f4c63d9901b16e1.slice.
Jun 20 19:10:31.721632 kubelet[2414]: E0620 19:10:31.721596 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.729092 kubelet[2414]: I0620 19:10:31.728959 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.729627 kubelet[2414]: E0620 19:10:31.729587 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.177.47:6443/api/v1/nodes\": dial tcp 168.119.177.47:6443: connect: connection refused" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.747678 kubelet[2414]: E0620 19:10:31.747631 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.177.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-2-fda0fd8fee?timeout=10s\": dial tcp 168.119.177.47:6443: connect: connection refused" interval="400ms"
Jun 20 19:10:31.849716 kubelet[2414]: I0620 19:10:31.849369 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.849716 kubelet[2414]: I0620 19:10:31.849719 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850175 kubelet[2414]: I0620 19:10:31.849759 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850175 kubelet[2414]: I0620 19:10:31.849814 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850175 kubelet[2414]: I0620 19:10:31.849843 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850175 kubelet[2414]: I0620 19:10:31.849879 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850175 kubelet[2414]: I0620 19:10:31.849910 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850381 kubelet[2414]: I0620 19:10:31.849952 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.850381 kubelet[2414]: I0620 19:10:31.849982 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5327dda4a910b0722f4c63d9901b16e1-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-2-fda0fd8fee\" (UID: \"5327dda4a910b0722f4c63d9901b16e1\") " pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.932510 kubelet[2414]: I0620 19:10:31.932412 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.933580 kubelet[2414]: E0620 19:10:31.933400 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.177.47:6443/api/v1/nodes\": dial tcp 168.119.177.47:6443: connect: connection refused" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:31.995760 containerd[1493]: time="2025-06-20T19:10:31.995593960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-2-fda0fd8fee,Uid:e561535f9f940b17ad94415027903bc7,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:32.016156 containerd[1493]: time="2025-06-20T19:10:32.016058569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-2-fda0fd8fee,Uid:ef3303e30b6b25231fb1e6d2d54e3f61,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:32.025191 containerd[1493]: time="2025-06-20T19:10:32.025025136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-2-fda0fd8fee,Uid:5327dda4a910b0722f4c63d9901b16e1,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:32.149400 kubelet[2414]: E0620 19:10:32.149249 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.177.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-2-fda0fd8fee?timeout=10s\": dial tcp 168.119.177.47:6443: connect: connection refused" interval="800ms"
Jun 20 19:10:32.337149 kubelet[2414]: I0620 19:10:32.336129 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:32.337149 kubelet[2414]: E0620 19:10:32.336597 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.177.47:6443/api/v1/nodes\": dial tcp 168.119.177.47:6443: connect: connection refused" node="ci-4230-2-0-2-fda0fd8fee"
Jun 20 19:10:32.549068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516583436.mount: Deactivated successfully.
Jun 20 19:10:32.557367 containerd[1493]: time="2025-06-20T19:10:32.557262896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:10:32.559716 containerd[1493]: time="2025-06-20T19:10:32.559632043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:10:32.561416 containerd[1493]: time="2025-06-20T19:10:32.561290457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jun 20 19:10:32.562463 containerd[1493]: time="2025-06-20T19:10:32.562405173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 20 19:10:32.564903 containerd[1493]: time="2025-06-20T19:10:32.564819238Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:10:32.565804 kubelet[2414]: E0620 19:10:32.565730 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.177.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 19:10:32.568100 containerd[1493]: time="2025-06-20T19:10:32.567868598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 20 19:10:32.568100 containerd[1493]: time="2025-06-20T19:10:32.568013472Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:10:32.570628 containerd[1493]: time="2025-06-20T19:10:32.570502454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:10:32.573519 containerd[1493]: time="2025-06-20T19:10:32.572710287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.533917ms"
Jun 20 19:10:32.576905 containerd[1493]: time="2025-06-20T19:10:32.576757887Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 581.049572ms"
Jun 20 19:10:32.602378 containerd[1493]: time="2025-06-20T19:10:32.601887575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.72801ms"
Jun 20 19:10:32.676627 containerd[1493]: time="2025-06-20T19:10:32.676489432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:32.676766 containerd[1493]: time="2025-06-20T19:10:32.676664145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:32.676835 containerd[1493]: time="2025-06-20T19:10:32.676802980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.677608 containerd[1493]: time="2025-06-20T19:10:32.677513551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.680913 containerd[1493]: time="2025-06-20T19:10:32.680816581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:32.681194 containerd[1493]: time="2025-06-20T19:10:32.680880179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:32.681442 containerd[1493]: time="2025-06-20T19:10:32.681264204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.681442 containerd[1493]: time="2025-06-20T19:10:32.681391638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.686403 containerd[1493]: time="2025-06-20T19:10:32.686309524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:32.686762 containerd[1493]: time="2025-06-20T19:10:32.686368482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:32.686896 containerd[1493]: time="2025-06-20T19:10:32.686853343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.689080 containerd[1493]: time="2025-06-20T19:10:32.688992859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:32.708819 systemd[1]: Started cri-containerd-6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f.scope - libcontainer container 6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f.
Jun 20 19:10:32.710971 kubelet[2414]: E0620 19:10:32.710496 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.177.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 19:10:32.711457 systemd[1]: Started cri-containerd-7cd0fea63870cd4aecc6a2e2cd5bc30aa33e19d2e7ec0ec17eb13ad5c26d6e5d.scope - libcontainer container 7cd0fea63870cd4aecc6a2e2cd5bc30aa33e19d2e7ec0ec17eb13ad5c26d6e5d.
Jun 20 19:10:32.717753 systemd[1]: Started cri-containerd-716b071c60e754900b97a8a2a7cb26fc46a584af226a5603660f7d8e17dac492.scope - libcontainer container 716b071c60e754900b97a8a2a7cb26fc46a584af226a5603660f7d8e17dac492.
Jun 20 19:10:32.759246 containerd[1493]: time="2025-06-20T19:10:32.758349122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-2-fda0fd8fee,Uid:ef3303e30b6b25231fb1e6d2d54e3f61,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f\"" Jun 20 19:10:32.772174 containerd[1493]: time="2025-06-20T19:10:32.772115139Z" level=info msg="CreateContainer within sandbox \"6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:10:32.779574 containerd[1493]: time="2025-06-20T19:10:32.778997507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-2-fda0fd8fee,Uid:e561535f9f940b17ad94415027903bc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"716b071c60e754900b97a8a2a7cb26fc46a584af226a5603660f7d8e17dac492\"" Jun 20 19:10:32.791243 containerd[1493]: time="2025-06-20T19:10:32.791188866Z" level=info msg="CreateContainer within sandbox \"716b071c60e754900b97a8a2a7cb26fc46a584af226a5603660f7d8e17dac492\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:10:32.802108 containerd[1493]: time="2025-06-20T19:10:32.802072637Z" level=info msg="CreateContainer within sandbox \"6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a\"" Jun 20 19:10:32.804244 containerd[1493]: time="2025-06-20T19:10:32.804039399Z" level=info msg="StartContainer for \"86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a\"" Jun 20 19:10:32.808341 containerd[1493]: time="2025-06-20T19:10:32.808221114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-2-fda0fd8fee,Uid:5327dda4a910b0722f4c63d9901b16e1,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"7cd0fea63870cd4aecc6a2e2cd5bc30aa33e19d2e7ec0ec17eb13ad5c26d6e5d\"" Jun 20 19:10:32.815930 containerd[1493]: time="2025-06-20T19:10:32.815829614Z" level=info msg="CreateContainer within sandbox \"7cd0fea63870cd4aecc6a2e2cd5bc30aa33e19d2e7ec0ec17eb13ad5c26d6e5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:10:32.830367 containerd[1493]: time="2025-06-20T19:10:32.829785344Z" level=info msg="CreateContainer within sandbox \"716b071c60e754900b97a8a2a7cb26fc46a584af226a5603660f7d8e17dac492\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a74d0117eb04274ee76f4210e2eede9cfa93a18dd95c43cfa6ef48279042069\"" Jun 20 19:10:32.831606 containerd[1493]: time="2025-06-20T19:10:32.831002456Z" level=info msg="StartContainer for \"0a74d0117eb04274ee76f4210e2eede9cfa93a18dd95c43cfa6ef48279042069\"" Jun 20 19:10:32.840709 containerd[1493]: time="2025-06-20T19:10:32.840670394Z" level=info msg="CreateContainer within sandbox \"7cd0fea63870cd4aecc6a2e2cd5bc30aa33e19d2e7ec0ec17eb13ad5c26d6e5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634\"" Jun 20 19:10:32.841416 containerd[1493]: time="2025-06-20T19:10:32.841385886Z" level=info msg="StartContainer for \"401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634\"" Jun 20 19:10:32.841747 systemd[1]: Started cri-containerd-86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a.scope - libcontainer container 86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a. 
Jun 20 19:10:32.853875 kubelet[2414]: E0620 19:10:32.853729 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.177.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-2-fda0fd8fee&limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:10:32.883016 systemd[1]: Started cri-containerd-0a74d0117eb04274ee76f4210e2eede9cfa93a18dd95c43cfa6ef48279042069.scope - libcontainer container 0a74d0117eb04274ee76f4210e2eede9cfa93a18dd95c43cfa6ef48279042069. Jun 20 19:10:32.901725 systemd[1]: Started cri-containerd-401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634.scope - libcontainer container 401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634. Jun 20 19:10:32.905680 containerd[1493]: time="2025-06-20T19:10:32.905634351Z" level=info msg="StartContainer for \"86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a\" returns successfully" Jun 20 19:10:32.951619 kubelet[2414]: E0620 19:10:32.951498 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.177.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-2-fda0fd8fee?timeout=10s\": dial tcp 168.119.177.47:6443: connect: connection refused" interval="1.6s" Jun 20 19:10:32.953460 containerd[1493]: time="2025-06-20T19:10:32.953084519Z" level=info msg="StartContainer for \"0a74d0117eb04274ee76f4210e2eede9cfa93a18dd95c43cfa6ef48279042069\" returns successfully" Jun 20 19:10:32.957132 kubelet[2414]: E0620 19:10:32.957063 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.177.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.177.47:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:10:32.962370 containerd[1493]: time="2025-06-20T19:10:32.962322674Z" level=info msg="StartContainer for \"401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634\" returns successfully" Jun 20 19:10:33.142656 kubelet[2414]: I0620 19:10:33.140082 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:33.588340 kubelet[2414]: E0620 19:10:33.588134 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:33.592569 kubelet[2414]: E0620 19:10:33.591645 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:33.593804 kubelet[2414]: E0620 19:10:33.593654 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:34.594406 kubelet[2414]: E0620 19:10:34.594092 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:34.596764 kubelet[2414]: E0620 19:10:34.596592 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.398558 kubelet[2414]: E0620 19:10:35.397655 2414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-2-fda0fd8fee\" not found" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.539589 kubelet[2414]: I0620 19:10:35.539144 2414 kubelet_node_status.go:78] "Successfully 
registered node" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.539589 kubelet[2414]: E0620 19:10:35.539181 2414 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-2-fda0fd8fee\": node \"ci-4230-2-0-2-fda0fd8fee\" not found" Jun 20 19:10:35.576315 kubelet[2414]: E0620 19:10:35.576260 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" Jun 20 19:10:35.677233 kubelet[2414]: E0620 19:10:35.677166 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" Jun 20 19:10:35.778024 kubelet[2414]: E0620 19:10:35.777976 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" Jun 20 19:10:35.846370 kubelet[2414]: I0620 19:10:35.845841 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.851743 kubelet[2414]: E0620 19:10:35.851709 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.852142 kubelet[2414]: I0620 19:10:35.851909 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.854566 kubelet[2414]: E0620 19:10:35.854508 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.855311 kubelet[2414]: I0620 19:10:35.854640 2414 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:35.857125 kubelet[2414]: E0620 19:10:35.857085 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-2-fda0fd8fee\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:36.523096 kubelet[2414]: I0620 19:10:36.522855 2414 apiserver.go:52] "Watching apiserver" Jun 20 19:10:36.549686 kubelet[2414]: I0620 19:10:36.549643 2414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:10:37.912718 systemd[1]: Reload requested from client PID 2702 ('systemctl') (unit session-7.scope)... Jun 20 19:10:37.913225 systemd[1]: Reloading... Jun 20 19:10:38.017578 zram_generator::config[2756]: No configuration found. Jun 20 19:10:38.108126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:10:38.213633 systemd[1]: Reloading finished in 299 ms. Jun 20 19:10:38.240078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:38.253147 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:10:38.254627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:38.254718 systemd[1]: kubelet.service: Consumed 1.986s CPU time, 127.7M memory peak. Jun 20 19:10:38.261016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:38.395895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:10:38.399211 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:10:38.438575 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:10:38.438575 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:10:38.438575 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:10:38.438575 kubelet[2792]: I0620 19:10:38.437958 2792 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:10:38.449279 kubelet[2792]: I0620 19:10:38.449247 2792 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:10:38.450815 kubelet[2792]: I0620 19:10:38.450678 2792 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:10:38.451884 kubelet[2792]: I0620 19:10:38.451847 2792 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:10:38.454509 kubelet[2792]: I0620 19:10:38.453633 2792 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:10:38.456354 kubelet[2792]: I0620 19:10:38.456213 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:10:38.462080 kubelet[2792]: E0620 19:10:38.462029 2792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:10:38.462336 kubelet[2792]: I0620 19:10:38.462313 2792 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:10:38.468204 kubelet[2792]: I0620 19:10:38.467724 2792 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:10:38.468802 kubelet[2792]: I0620 19:10:38.468463 2792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:10:38.468802 kubelet[2792]: I0620 19:10:38.468495 2792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-2-fda0fd8fee","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"
CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:10:38.468802 kubelet[2792]: I0620 19:10:38.468677 2792 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:10:38.468802 kubelet[2792]: I0620 19:10:38.468686 2792 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:10:38.468802 kubelet[2792]: I0620 19:10:38.468760 2792 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:38.469242 kubelet[2792]: I0620 19:10:38.469223 2792 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:10:38.469323 kubelet[2792]: I0620 19:10:38.469312 2792 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:10:38.469392 kubelet[2792]: I0620 19:10:38.469382 2792 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:10:38.469440 kubelet[2792]: I0620 19:10:38.469432 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:10:38.474996 kubelet[2792]: I0620 19:10:38.474965 2792 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:10:38.475922 kubelet[2792]: I0620 19:10:38.475881 2792 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:10:38.478460 kubelet[2792]: I0620 19:10:38.478389 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:10:38.478953 kubelet[2792]: I0620 19:10:38.478937 2792 server.go:1289] "Started kubelet" Jun 20 19:10:38.481055 kubelet[2792]: I0620 19:10:38.480945 2792 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jun 20 19:10:38.481633 kubelet[2792]: I0620 19:10:38.481335 2792 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:10:38.481847 kubelet[2792]: I0620 19:10:38.480990 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:10:38.482836 kubelet[2792]: I0620 19:10:38.482815 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:10:38.483639 kubelet[2792]: I0620 19:10:38.483618 2792 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:10:38.496654 kubelet[2792]: I0620 19:10:38.496621 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:10:38.499856 kubelet[2792]: I0620 19:10:38.498198 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:10:38.499856 kubelet[2792]: E0620 19:10:38.498434 2792 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-2-fda0fd8fee\" not found" Jun 20 19:10:38.499856 kubelet[2792]: I0620 19:10:38.499115 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:10:38.499856 kubelet[2792]: I0620 19:10:38.499238 2792 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:10:38.500939 kubelet[2792]: I0620 19:10:38.500917 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:10:38.509503 kubelet[2792]: I0620 19:10:38.509463 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:10:38.509654 kubelet[2792]: I0620 19:10:38.509644 2792 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:10:38.509753 kubelet[2792]: I0620 19:10:38.509742 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:10:38.509821 kubelet[2792]: I0620 19:10:38.509813 2792 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:10:38.509948 kubelet[2792]: E0620 19:10:38.509931 2792 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:10:38.519676 kubelet[2792]: I0620 19:10:38.519649 2792 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:10:38.520064 kubelet[2792]: I0620 19:10:38.520041 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:10:38.525699 kubelet[2792]: I0620 19:10:38.525677 2792 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:10:38.533164 kubelet[2792]: E0620 19:10:38.533142 2792 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:10:38.587034 kubelet[2792]: I0620 19:10:38.587008 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:10:38.587237 kubelet[2792]: I0620 19:10:38.587199 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:10:38.587316 kubelet[2792]: I0620 19:10:38.587307 2792 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:38.587524 kubelet[2792]: I0620 19:10:38.587509 2792 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:10:38.587631 kubelet[2792]: I0620 19:10:38.587607 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:10:38.587683 kubelet[2792]: I0620 19:10:38.587674 2792 policy_none.go:49] "None policy: Start" Jun 20 19:10:38.587787 kubelet[2792]: I0620 19:10:38.587775 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:10:38.587851 kubelet[2792]: I0620 19:10:38.587843 2792 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:10:38.588002 kubelet[2792]: I0620 19:10:38.587990 2792 state_mem.go:75] "Updated machine memory state" Jun 20 19:10:38.594479 kubelet[2792]: E0620 19:10:38.594456 2792 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:10:38.595163 kubelet[2792]: I0620 19:10:38.595141 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:10:38.595236 kubelet[2792]: I0620 19:10:38.595160 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:10:38.595584 kubelet[2792]: I0620 19:10:38.595563 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:10:38.597751 kubelet[2792]: E0620 19:10:38.597697 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:10:38.611355 kubelet[2792]: I0620 19:10:38.611292 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.612819 kubelet[2792]: I0620 19:10:38.612784 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.613270 kubelet[2792]: I0620 19:10:38.613243 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699559 kubelet[2792]: I0620 19:10:38.699491 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699559 kubelet[2792]: I0620 19:10:38.699559 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699767 kubelet[2792]: I0620 19:10:38.699592 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699767 kubelet[2792]: I0620 19:10:38.699614 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5327dda4a910b0722f4c63d9901b16e1-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-2-fda0fd8fee\" (UID: \"5327dda4a910b0722f4c63d9901b16e1\") " pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699767 kubelet[2792]: I0620 19:10:38.699634 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699767 kubelet[2792]: I0620 19:10:38.699653 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e561535f9f940b17ad94415027903bc7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-2-fda0fd8fee\" (UID: \"e561535f9f940b17ad94415027903bc7\") " pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699767 kubelet[2792]: I0620 19:10:38.699672 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699921 kubelet[2792]: I0620 19:10:38.699691 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " 
pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.699921 kubelet[2792]: I0620 19:10:38.699735 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef3303e30b6b25231fb1e6d2d54e3f61-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-2-fda0fd8fee\" (UID: \"ef3303e30b6b25231fb1e6d2d54e3f61\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.700041 kubelet[2792]: I0620 19:10:38.699988 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.711256 kubelet[2792]: I0620 19:10:38.711212 2792 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.711427 kubelet[2792]: I0620 19:10:38.711327 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:38.910057 sudo[2829]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:10:38.911340 sudo[2829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:10:39.363162 sudo[2829]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:39.473934 kubelet[2792]: I0620 19:10:39.473877 2792 apiserver.go:52] "Watching apiserver" Jun 20 19:10:39.499503 kubelet[2792]: I0620 19:10:39.499440 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:10:39.561348 kubelet[2792]: I0620 19:10:39.561306 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:39.571713 kubelet[2792]: E0620 19:10:39.571516 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-2-fda0fd8fee\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" Jun 20 19:10:39.597909 
kubelet[2792]: I0620 19:10:39.597341 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-2-fda0fd8fee" podStartSLOduration=1.597326907 podStartE2EDuration="1.597326907s" podCreationTimestamp="2025-06-20 19:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:39.583817683 +0000 UTC m=+1.179770990" watchObservedRunningTime="2025-06-20 19:10:39.597326907 +0000 UTC m=+1.193280174" Jun 20 19:10:39.609503 kubelet[2792]: I0620 19:10:39.609027 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-2-fda0fd8fee" podStartSLOduration=1.609008041 podStartE2EDuration="1.609008041s" podCreationTimestamp="2025-06-20 19:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:39.607722902 +0000 UTC m=+1.203676169" watchObservedRunningTime="2025-06-20 19:10:39.609008041 +0000 UTC m=+1.204961308" Jun 20 19:10:39.610171 kubelet[2792]: I0620 19:10:39.609473 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-2-fda0fd8fee" podStartSLOduration=1.609460754 podStartE2EDuration="1.609460754s" podCreationTimestamp="2025-06-20 19:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:39.597506345 +0000 UTC m=+1.193459612" watchObservedRunningTime="2025-06-20 19:10:39.609460754 +0000 UTC m=+1.205414021" Jun 20 19:10:41.683128 sudo[1874]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:41.842885 sshd[1873]: Connection closed by 147.75.109.163 port 37066 Jun 20 19:10:41.843850 sshd-session[1871]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:41.850054 
systemd[1]: sshd@6-168.119.177.47:22-147.75.109.163:37066.service: Deactivated successfully. Jun 20 19:10:41.852981 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:10:41.853395 systemd[1]: session-7.scope: Consumed 8.197s CPU time, 266M memory peak. Jun 20 19:10:41.857251 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:10:41.859263 systemd-logind[1473]: Removed session 7. Jun 20 19:10:45.708019 kubelet[2792]: I0620 19:10:45.707967 2792 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:10:45.708789 containerd[1493]: time="2025-06-20T19:10:45.708720631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:10:45.709239 kubelet[2792]: I0620 19:10:45.708964 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:10:46.743987 systemd[1]: Created slice kubepods-besteffort-podcc2f8586_6f8b_4c84_a854_5f1caee52330.slice - libcontainer container kubepods-besteffort-podcc2f8586_6f8b_4c84_a854_5f1caee52330.slice. 
Jun 20 19:10:46.752661 kubelet[2792]: I0620 19:10:46.752413 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc2f8586-6f8b-4c84-a854-5f1caee52330-kube-proxy\") pod \"kube-proxy-nj6v5\" (UID: \"cc2f8586-6f8b-4c84-a854-5f1caee52330\") " pod="kube-system/kube-proxy-nj6v5"
Jun 20 19:10:46.752661 kubelet[2792]: I0620 19:10:46.752446 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-bpf-maps\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.752661 kubelet[2792]: I0620 19:10:46.752463 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cni-path\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.752661 kubelet[2792]: I0620 19:10:46.752479 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-etc-cni-netd\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.752661 kubelet[2792]: I0620 19:10:46.752508 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-lib-modules\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.753860 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-xtables-lock\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.753944 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/977a182c-e23a-49dd-a385-900aa02f271f-cilium-config-path\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.753962 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-net\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.753985 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-kernel\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.754005 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-hubble-tls\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754282 kubelet[2792]: I0620 19:10:46.754021 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-run\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754039 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-hostproc\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754074 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-cgroup\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754094 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/977a182c-e23a-49dd-a385-900aa02f271f-clustermesh-secrets\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754109 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnd2d\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-kube-api-access-lnd2d\") pod \"cilium-wbcws\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " pod="kube-system/cilium-wbcws"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754124 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc2f8586-6f8b-4c84-a854-5f1caee52330-xtables-lock\") pod \"kube-proxy-nj6v5\" (UID: \"cc2f8586-6f8b-4c84-a854-5f1caee52330\") " pod="kube-system/kube-proxy-nj6v5"
Jun 20 19:10:46.754526 kubelet[2792]: I0620 19:10:46.754149 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc2f8586-6f8b-4c84-a854-5f1caee52330-lib-modules\") pod \"kube-proxy-nj6v5\" (UID: \"cc2f8586-6f8b-4c84-a854-5f1caee52330\") " pod="kube-system/kube-proxy-nj6v5"
Jun 20 19:10:46.758104 kubelet[2792]: I0620 19:10:46.754163 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrmg\" (UniqueName: \"kubernetes.io/projected/cc2f8586-6f8b-4c84-a854-5f1caee52330-kube-api-access-bdrmg\") pod \"kube-proxy-nj6v5\" (UID: \"cc2f8586-6f8b-4c84-a854-5f1caee52330\") " pod="kube-system/kube-proxy-nj6v5"
Jun 20 19:10:46.764188 systemd[1]: Created slice kubepods-burstable-pod977a182c_e23a_49dd_a385_900aa02f271f.slice - libcontainer container kubepods-burstable-pod977a182c_e23a_49dd_a385_900aa02f271f.slice.
Jun 20 19:10:46.939833 systemd[1]: Created slice kubepods-besteffort-pod65062899_4aaa_4263_b605_324a1df5c558.slice - libcontainer container kubepods-besteffort-pod65062899_4aaa_4263_b605_324a1df5c558.slice.
Jun 20 19:10:46.955930 kubelet[2792]: I0620 19:10:46.955173 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65062899-4aaa-4263-b605-324a1df5c558-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sgxf2\" (UID: \"65062899-4aaa-4263-b605-324a1df5c558\") " pod="kube-system/cilium-operator-6c4d7847fc-sgxf2"
Jun 20 19:10:46.955930 kubelet[2792]: I0620 19:10:46.955228 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmkf\" (UniqueName: \"kubernetes.io/projected/65062899-4aaa-4263-b605-324a1df5c558-kube-api-access-mxmkf\") pod \"cilium-operator-6c4d7847fc-sgxf2\" (UID: \"65062899-4aaa-4263-b605-324a1df5c558\") " pod="kube-system/cilium-operator-6c4d7847fc-sgxf2"
Jun 20 19:10:47.061215 containerd[1493]: time="2025-06-20T19:10:47.061046556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj6v5,Uid:cc2f8586-6f8b-4c84-a854-5f1caee52330,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:47.070818 containerd[1493]: time="2025-06-20T19:10:47.070696206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wbcws,Uid:977a182c-e23a-49dd-a385-900aa02f271f,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:47.098448 containerd[1493]: time="2025-06-20T19:10:47.097893308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:47.098448 containerd[1493]: time="2025-06-20T19:10:47.098008629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:47.098448 containerd[1493]: time="2025-06-20T19:10:47.098038709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.098448 containerd[1493]: time="2025-06-20T19:10:47.098159630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.107514 containerd[1493]: time="2025-06-20T19:10:47.106839715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:47.107514 containerd[1493]: time="2025-06-20T19:10:47.107362437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:47.107514 containerd[1493]: time="2025-06-20T19:10:47.107375958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.107514 containerd[1493]: time="2025-06-20T19:10:47.107459678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.133777 systemd[1]: Started cri-containerd-7d319eaa4076ca6714f21458ca772af784c942733e263b6643e12807697880b0.scope - libcontainer container 7d319eaa4076ca6714f21458ca772af784c942733e263b6643e12807697880b0.
Jun 20 19:10:47.138447 systemd[1]: Started cri-containerd-96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32.scope - libcontainer container 96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32.
Jun 20 19:10:47.172221 containerd[1493]: time="2025-06-20T19:10:47.172015614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj6v5,Uid:cc2f8586-6f8b-4c84-a854-5f1caee52330,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d319eaa4076ca6714f21458ca772af784c942733e263b6643e12807697880b0\""
Jun 20 19:10:47.177116 containerd[1493]: time="2025-06-20T19:10:47.175991355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wbcws,Uid:977a182c-e23a-49dd-a385-900aa02f271f,Namespace:kube-system,Attempt:0,} returns sandbox id \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\""
Jun 20 19:10:47.183040 containerd[1493]: time="2025-06-20T19:10:47.182367988Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 20 19:10:47.184356 containerd[1493]: time="2025-06-20T19:10:47.183807156Z" level=info msg="CreateContainer within sandbox \"7d319eaa4076ca6714f21458ca772af784c942733e263b6643e12807697880b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:10:47.208842 containerd[1493]: time="2025-06-20T19:10:47.206893156Z" level=info msg="CreateContainer within sandbox \"7d319eaa4076ca6714f21458ca772af784c942733e263b6643e12807697880b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85918dceac513fd9834c0a94d760710079271268f09ccf53567d2598afdb7e2b\""
Jun 20 19:10:47.210115 containerd[1493]: time="2025-06-20T19:10:47.209133928Z" level=info msg="StartContainer for \"85918dceac513fd9834c0a94d760710079271268f09ccf53567d2598afdb7e2b\""
Jun 20 19:10:47.237806 systemd[1]: Started cri-containerd-85918dceac513fd9834c0a94d760710079271268f09ccf53567d2598afdb7e2b.scope - libcontainer container 85918dceac513fd9834c0a94d760710079271268f09ccf53567d2598afdb7e2b.
Jun 20 19:10:47.248316 containerd[1493]: time="2025-06-20T19:10:47.248281572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sgxf2,Uid:65062899-4aaa-4263-b605-324a1df5c558,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:47.280818 containerd[1493]: time="2025-06-20T19:10:47.280673581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:10:47.280818 containerd[1493]: time="2025-06-20T19:10:47.280770621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:10:47.280818 containerd[1493]: time="2025-06-20T19:10:47.280783141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.281483 containerd[1493]: time="2025-06-20T19:10:47.281294904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:10:47.282165 containerd[1493]: time="2025-06-20T19:10:47.282004628Z" level=info msg="StartContainer for \"85918dceac513fd9834c0a94d760710079271268f09ccf53567d2598afdb7e2b\" returns successfully"
Jun 20 19:10:47.312996 systemd[1]: Started cri-containerd-e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230.scope - libcontainer container e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230.
Jun 20 19:10:47.355488 containerd[1493]: time="2025-06-20T19:10:47.355432330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sgxf2,Uid:65062899-4aaa-4263-b605-324a1df5c558,Namespace:kube-system,Attempt:0,} returns sandbox id \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\""
Jun 20 19:10:47.593743 kubelet[2792]: I0620 19:10:47.593456 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nj6v5" podStartSLOduration=1.593438091 podStartE2EDuration="1.593438091s" podCreationTimestamp="2025-06-20 19:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:47.592786607 +0000 UTC m=+9.188739874" watchObservedRunningTime="2025-06-20 19:10:47.593438091 +0000 UTC m=+9.189391398"
Jun 20 19:10:51.361141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293378283.mount: Deactivated successfully.
Jun 20 19:10:53.125685 containerd[1493]: time="2025-06-20T19:10:53.125617465Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:53.127819 containerd[1493]: time="2025-06-20T19:10:53.127265575Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jun 20 19:10:53.127819 containerd[1493]: time="2025-06-20T19:10:53.127744863Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:53.130464 containerd[1493]: time="2025-06-20T19:10:53.130259788Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.947843679s"
Jun 20 19:10:53.130464 containerd[1493]: time="2025-06-20T19:10:53.130322549Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jun 20 19:10:53.132668 containerd[1493]: time="2025-06-20T19:10:53.132274864Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 20 19:10:53.138294 containerd[1493]: time="2025-06-20T19:10:53.138236971Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:10:53.154508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716292073.mount: Deactivated successfully.
Jun 20 19:10:53.162431 containerd[1493]: time="2025-06-20T19:10:53.162368763Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\""
Jun 20 19:10:53.164075 containerd[1493]: time="2025-06-20T19:10:53.163863270Z" level=info msg="StartContainer for \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\""
Jun 20 19:10:53.207783 systemd[1]: Started cri-containerd-f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9.scope - libcontainer container f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9.
Jun 20 19:10:53.243951 containerd[1493]: time="2025-06-20T19:10:53.243891824Z" level=info msg="StartContainer for \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\" returns successfully"
Jun 20 19:10:53.258470 systemd[1]: cri-containerd-f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9.scope: Deactivated successfully.
Jun 20 19:10:53.451842 containerd[1493]: time="2025-06-20T19:10:53.451719306Z" level=info msg="shim disconnected" id=f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9 namespace=k8s.io
Jun 20 19:10:53.451842 containerd[1493]: time="2025-06-20T19:10:53.451797147Z" level=warning msg="cleaning up after shim disconnected" id=f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9 namespace=k8s.io
Jun 20 19:10:53.451842 containerd[1493]: time="2025-06-20T19:10:53.451807108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:10:53.606659 containerd[1493]: time="2025-06-20T19:10:53.606035350Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:10:53.632365 containerd[1493]: time="2025-06-20T19:10:53.632297701Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\""
Jun 20 19:10:53.634646 containerd[1493]: time="2025-06-20T19:10:53.633728686Z" level=info msg="StartContainer for \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\""
Jun 20 19:10:53.665769 systemd[1]: Started cri-containerd-e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce.scope - libcontainer container e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce.
Jun 20 19:10:53.696291 containerd[1493]: time="2025-06-20T19:10:53.696140084Z" level=info msg="StartContainer for \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\" returns successfully"
Jun 20 19:10:53.719850 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:10:53.720705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:53.721663 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:53.728921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:53.729116 systemd[1]: cri-containerd-e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce.scope: Deactivated successfully.
Jun 20 19:10:53.753399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:53.767657 containerd[1493]: time="2025-06-20T19:10:53.767370360Z" level=info msg="shim disconnected" id=e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce namespace=k8s.io
Jun 20 19:10:53.767657 containerd[1493]: time="2025-06-20T19:10:53.767449561Z" level=warning msg="cleaning up after shim disconnected" id=e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce namespace=k8s.io
Jun 20 19:10:53.767657 containerd[1493]: time="2025-06-20T19:10:53.767465242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:10:54.148820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9-rootfs.mount: Deactivated successfully.
Jun 20 19:10:54.616068 containerd[1493]: time="2025-06-20T19:10:54.616023441Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:10:54.643428 containerd[1493]: time="2025-06-20T19:10:54.643371862Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\""
Jun 20 19:10:54.644460 containerd[1493]: time="2025-06-20T19:10:54.644419843Z" level=info msg="StartContainer for \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\""
Jun 20 19:10:54.685955 systemd[1]: Started cri-containerd-51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea.scope - libcontainer container 51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea.
Jun 20 19:10:54.734932 containerd[1493]: time="2025-06-20T19:10:54.734889715Z" level=info msg="StartContainer for \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\" returns successfully"
Jun 20 19:10:54.737876 systemd[1]: cri-containerd-51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea.scope: Deactivated successfully.
Jun 20 19:10:54.790252 containerd[1493]: time="2025-06-20T19:10:54.790178930Z" level=info msg="shim disconnected" id=51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea namespace=k8s.io
Jun 20 19:10:54.790252 containerd[1493]: time="2025-06-20T19:10:54.790241211Z" level=warning msg="cleaning up after shim disconnected" id=51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea namespace=k8s.io
Jun 20 19:10:54.790252 containerd[1493]: time="2025-06-20T19:10:54.790253731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:10:55.149475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea-rootfs.mount: Deactivated successfully.
Jun 20 19:10:55.153352 containerd[1493]: time="2025-06-20T19:10:55.153300198Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:55.155348 containerd[1493]: time="2025-06-20T19:10:55.155287121Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jun 20 19:10:55.156972 containerd[1493]: time="2025-06-20T19:10:55.156925277Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:55.158527 containerd[1493]: time="2025-06-20T19:10:55.158472070Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.026142445s"
Jun 20 19:10:55.158527 containerd[1493]: time="2025-06-20T19:10:55.158516471Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jun 20 19:10:55.165595 containerd[1493]: time="2025-06-20T19:10:55.165492182Z" level=info msg="CreateContainer within sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 20 19:10:55.187885 containerd[1493]: time="2025-06-20T19:10:55.187732503Z" level=info msg="CreateContainer within sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\""
Jun 20 19:10:55.188847 containerd[1493]: time="2025-06-20T19:10:55.188705004Z" level=info msg="StartContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\""
Jun 20 19:10:55.223866 systemd[1]: Started cri-containerd-73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb.scope - libcontainer container 73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb.
Jun 20 19:10:55.255862 containerd[1493]: time="2025-06-20T19:10:55.255530330Z" level=info msg="StartContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" returns successfully"
Jun 20 19:10:55.621570 containerd[1493]: time="2025-06-20T19:10:55.621071799Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:10:55.636926 containerd[1493]: time="2025-06-20T19:10:55.636846060Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\""
Jun 20 19:10:55.637375 containerd[1493]: time="2025-06-20T19:10:55.637327390Z" level=info msg="StartContainer for \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\""
Jun 20 19:10:55.681839 systemd[1]: Started cri-containerd-a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf.scope - libcontainer container a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf.
Jun 20 19:10:55.703912 kubelet[2792]: I0620 19:10:55.703837 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sgxf2" podStartSLOduration=1.90065953 podStartE2EDuration="9.703819469s" podCreationTimestamp="2025-06-20 19:10:46 +0000 UTC" firstStartedPulling="2025-06-20 19:10:47.357631822 +0000 UTC m=+8.953585089" lastFinishedPulling="2025-06-20 19:10:55.160791721 +0000 UTC m=+16.756745028" observedRunningTime="2025-06-20 19:10:55.656492765 +0000 UTC m=+17.252446032" watchObservedRunningTime="2025-06-20 19:10:55.703819469 +0000 UTC m=+17.299772736"
Jun 20 19:10:55.755659 systemd[1]: cri-containerd-a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf.scope: Deactivated successfully.
Jun 20 19:10:55.759015 containerd[1493]: time="2025-06-20T19:10:55.758965382Z" level=info msg="StartContainer for \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\" returns successfully"
Jun 20 19:10:55.839249 containerd[1493]: time="2025-06-20T19:10:55.839138076Z" level=info msg="shim disconnected" id=a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf namespace=k8s.io
Jun 20 19:10:55.839249 containerd[1493]: time="2025-06-20T19:10:55.839218118Z" level=warning msg="cleaning up after shim disconnected" id=a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf namespace=k8s.io
Jun 20 19:10:55.839249 containerd[1493]: time="2025-06-20T19:10:55.839226918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:10:56.629387 containerd[1493]: time="2025-06-20T19:10:56.629211764Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:10:56.648064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156330403.mount: Deactivated successfully.
Jun 20 19:10:56.648937 containerd[1493]: time="2025-06-20T19:10:56.648729101Z" level=info msg="CreateContainer within sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\""
Jun 20 19:10:56.650413 containerd[1493]: time="2025-06-20T19:10:56.649848727Z" level=info msg="StartContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\""
Jun 20 19:10:56.690794 systemd[1]: Started cri-containerd-21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554.scope - libcontainer container 21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554.
Jun 20 19:10:56.725226 containerd[1493]: time="2025-06-20T19:10:56.725062528Z" level=info msg="StartContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" returns successfully"
Jun 20 19:10:56.844239 kubelet[2792]: I0620 19:10:56.844198 2792 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jun 20 19:10:56.901162 systemd[1]: Created slice kubepods-burstable-pod8e5f0d14_63e3_41fe_9b9b_189353aa5977.slice - libcontainer container kubepods-burstable-pod8e5f0d14_63e3_41fe_9b9b_189353aa5977.slice.
Jun 20 19:10:56.908328 systemd[1]: Created slice kubepods-burstable-podda85cc82_12f6_4983_92e5_0abe3beead84.slice - libcontainer container kubepods-burstable-podda85cc82_12f6_4983_92e5_0abe3beead84.slice.
Jun 20 19:10:56.926116 kubelet[2792]: I0620 19:10:56.925740 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da85cc82-12f6-4983-92e5-0abe3beead84-config-volume\") pod \"coredns-674b8bbfcf-q7ct7\" (UID: \"da85cc82-12f6-4983-92e5-0abe3beead84\") " pod="kube-system/coredns-674b8bbfcf-q7ct7"
Jun 20 19:10:56.926116 kubelet[2792]: I0620 19:10:56.925792 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmsbv\" (UniqueName: \"kubernetes.io/projected/8e5f0d14-63e3-41fe-9b9b-189353aa5977-kube-api-access-jmsbv\") pod \"coredns-674b8bbfcf-9g8wl\" (UID: \"8e5f0d14-63e3-41fe-9b9b-189353aa5977\") " pod="kube-system/coredns-674b8bbfcf-9g8wl"
Jun 20 19:10:56.926116 kubelet[2792]: I0620 19:10:56.925811 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkcg\" (UniqueName: \"kubernetes.io/projected/da85cc82-12f6-4983-92e5-0abe3beead84-kube-api-access-9kkcg\") pod \"coredns-674b8bbfcf-q7ct7\" (UID: \"da85cc82-12f6-4983-92e5-0abe3beead84\") " pod="kube-system/coredns-674b8bbfcf-q7ct7"
Jun 20 19:10:56.926116 kubelet[2792]: I0620 19:10:56.925828 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5f0d14-63e3-41fe-9b9b-189353aa5977-config-volume\") pod \"coredns-674b8bbfcf-9g8wl\" (UID: \"8e5f0d14-63e3-41fe-9b9b-189353aa5977\") " pod="kube-system/coredns-674b8bbfcf-9g8wl"
Jun 20 19:10:57.152799 systemd[1]: run-containerd-runc-k8s.io-21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554-runc.TJJqfk.mount: Deactivated successfully.
Jun 20 19:10:57.208684 containerd[1493]: time="2025-06-20T19:10:57.208226074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9g8wl,Uid:8e5f0d14-63e3-41fe-9b9b-189353aa5977,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:57.213663 containerd[1493]: time="2025-06-20T19:10:57.213487166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7ct7,Uid:da85cc82-12f6-4983-92e5-0abe3beead84,Namespace:kube-system,Attempt:0,}"
Jun 20 19:10:57.654326 kubelet[2792]: I0620 19:10:57.654192 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wbcws" podStartSLOduration=5.7042899689999995 podStartE2EDuration="11.65417344s" podCreationTimestamp="2025-06-20 19:10:46 +0000 UTC" firstStartedPulling="2025-06-20 19:10:47.181986986 +0000 UTC m=+8.777940253" lastFinishedPulling="2025-06-20 19:10:53.131870337 +0000 UTC m=+14.727823724" observedRunningTime="2025-06-20 19:10:57.652690923 +0000 UTC m=+19.248644230" watchObservedRunningTime="2025-06-20 19:10:57.65417344 +0000 UTC m=+19.250126707"
Jun 20 19:10:58.996677 systemd-networkd[1398]: cilium_host: Link UP
Jun 20 19:10:58.997921 systemd-networkd[1398]: cilium_net: Link UP
Jun 20 19:10:58.998989 systemd-networkd[1398]: cilium_net: Gained carrier
Jun 20 19:10:58.999167 systemd-networkd[1398]: cilium_host: Gained carrier
Jun 20 19:10:58.999269 systemd-networkd[1398]: cilium_host: Gained IPv6LL
Jun 20 19:10:59.119412 systemd-networkd[1398]: cilium_vxlan: Link UP
Jun 20 19:10:59.119845 systemd-networkd[1398]: cilium_vxlan: Gained carrier
Jun 20 19:10:59.408928 kernel: NET: Registered PF_ALG protocol family
Jun 20 19:10:59.978817 systemd-networkd[1398]: cilium_net: Gained IPv6LL
Jun 20 19:11:00.141899 systemd-networkd[1398]: lxc_health: Link UP
Jun 20 19:11:00.148336 systemd-networkd[1398]: lxc_health: Gained carrier
Jun 20 19:11:00.291812 systemd-networkd[1398]: lxc361044fce9f1: Link UP
Jun 20 19:11:00.296582 kernel: eth0: renamed from tmp0f547
Jun 20 19:11:00.303153 kernel: eth0: renamed from tmp96a1f
Jun 20 19:11:00.308144 systemd-networkd[1398]: lxcac4beddd1dd1: Link UP
Jun 20 19:11:00.312357 systemd-networkd[1398]: lxc361044fce9f1: Gained carrier
Jun 20 19:11:00.312782 systemd-networkd[1398]: lxcac4beddd1dd1: Gained carrier
Jun 20 19:11:00.746738 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL
Jun 20 19:11:01.579949 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Jun 20 19:11:01.707820 systemd-networkd[1398]: lxc361044fce9f1: Gained IPv6LL
Jun 20 19:11:02.349456 systemd-networkd[1398]: lxcac4beddd1dd1: Gained IPv6LL
Jun 20 19:11:02.566944 systemd[1]: Started sshd@7-168.119.177.47:22-64.62.156.115:7635.service - OpenSSH per-connection server daemon (64.62.156.115:7635).
Jun 20 19:11:03.344581 sshd[4013]: Invalid user  from 64.62.156.115 port 7635
Jun 20 19:11:04.262912 containerd[1493]: time="2025-06-20T19:11:04.262127234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:04.262912 containerd[1493]: time="2025-06-20T19:11:04.262196596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:04.262912 containerd[1493]: time="2025-06-20T19:11:04.262212717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:04.263803 containerd[1493]: time="2025-06-20T19:11:04.263674609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:04.301301 containerd[1493]: time="2025-06-20T19:11:04.297628583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:04.301301 containerd[1493]: time="2025-06-20T19:11:04.297759708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:04.301301 containerd[1493]: time="2025-06-20T19:11:04.297773628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:04.301301 containerd[1493]: time="2025-06-20T19:11:04.297866591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:04.302341 systemd[1]: Started cri-containerd-0f547b64f0917964d8dccb614a2469dc3113559e6cf0f929a085e5acb6c37f46.scope - libcontainer container 0f547b64f0917964d8dccb614a2469dc3113559e6cf0f929a085e5acb6c37f46.
Jun 20 19:11:04.333625 systemd[1]: run-containerd-runc-k8s.io-96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8-runc.2SOncy.mount: Deactivated successfully.
Jun 20 19:11:04.344295 systemd[1]: Started cri-containerd-96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8.scope - libcontainer container 96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8.
Jun 20 19:11:04.381308 containerd[1493]: time="2025-06-20T19:11:04.381253933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7ct7,Uid:da85cc82-12f6-4983-92e5-0abe3beead84,Namespace:kube-system,Attempt:0,} returns sandbox id \"96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8\"" Jun 20 19:11:04.392648 containerd[1493]: time="2025-06-20T19:11:04.392138642Z" level=info msg="CreateContainer within sandbox \"96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:11:04.401566 containerd[1493]: time="2025-06-20T19:11:04.401404773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9g8wl,Uid:8e5f0d14-63e3-41fe-9b9b-189353aa5977,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f547b64f0917964d8dccb614a2469dc3113559e6cf0f929a085e5acb6c37f46\"" Jun 20 19:11:04.407092 containerd[1493]: time="2025-06-20T19:11:04.406508635Z" level=info msg="CreateContainer within sandbox \"0f547b64f0917964d8dccb614a2469dc3113559e6cf0f929a085e5acb6c37f46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:11:04.430075 containerd[1493]: time="2025-06-20T19:11:04.429964434Z" level=info msg="CreateContainer within sandbox \"96a1f1aa1be54e77e7ede1714bfcb994e91e046bb9aac947c13aaaba3578dce8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c4c45dbae9782ca5392da970386cca739f7db1aec4d0a2349c5edf398da6394\"" Jun 20 19:11:04.432420 containerd[1493]: time="2025-06-20T19:11:04.431948625Z" level=info msg="StartContainer for \"0c4c45dbae9782ca5392da970386cca739f7db1aec4d0a2349c5edf398da6394\"" Jun 20 19:11:04.439119 containerd[1493]: time="2025-06-20T19:11:04.439067279Z" level=info msg="CreateContainer within sandbox \"0f547b64f0917964d8dccb614a2469dc3113559e6cf0f929a085e5acb6c37f46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"b2df8a20049b030fd3cf281dc72e3782d35da3461f477b7b0c3704840db10fb1\"" Jun 20 19:11:04.440017 containerd[1493]: time="2025-06-20T19:11:04.439980192Z" level=info msg="StartContainer for \"b2df8a20049b030fd3cf281dc72e3782d35da3461f477b7b0c3704840db10fb1\"" Jun 20 19:11:04.481752 systemd[1]: Started cri-containerd-0c4c45dbae9782ca5392da970386cca739f7db1aec4d0a2349c5edf398da6394.scope - libcontainer container 0c4c45dbae9782ca5392da970386cca739f7db1aec4d0a2349c5edf398da6394. Jun 20 19:11:04.491717 systemd[1]: Started cri-containerd-b2df8a20049b030fd3cf281dc72e3782d35da3461f477b7b0c3704840db10fb1.scope - libcontainer container b2df8a20049b030fd3cf281dc72e3782d35da3461f477b7b0c3704840db10fb1. Jun 20 19:11:04.554029 containerd[1493]: time="2025-06-20T19:11:04.553631975Z" level=info msg="StartContainer for \"0c4c45dbae9782ca5392da970386cca739f7db1aec4d0a2349c5edf398da6394\" returns successfully" Jun 20 19:11:04.554029 containerd[1493]: time="2025-06-20T19:11:04.553882064Z" level=info msg="StartContainer for \"b2df8a20049b030fd3cf281dc72e3782d35da3461f477b7b0c3704840db10fb1\" returns successfully" Jun 20 19:11:04.685005 kubelet[2792]: I0620 19:11:04.684026 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q7ct7" podStartSLOduration=18.684008356 podStartE2EDuration="18.684008356s" podCreationTimestamp="2025-06-20 19:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:04.677245435 +0000 UTC m=+26.273198702" watchObservedRunningTime="2025-06-20 19:11:04.684008356 +0000 UTC m=+26.279961623" Jun 20 19:11:04.739219 kubelet[2792]: I0620 19:11:04.738328 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9g8wl" podStartSLOduration=18.738302857 podStartE2EDuration="18.738302857s" podCreationTimestamp="2025-06-20 19:10:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:04.708464711 +0000 UTC m=+26.304417978" watchObservedRunningTime="2025-06-20 19:11:04.738302857 +0000 UTC m=+26.334256124" Jun 20 19:11:06.547834 sshd[4013]: Connection closed by invalid user 64.62.156.115 port 7635 [preauth] Jun 20 19:11:06.551022 systemd[1]: sshd@7-168.119.177.47:22-64.62.156.115:7635.service: Deactivated successfully. Jun 20 19:15:17.927222 systemd[1]: Started sshd@8-168.119.177.47:22-147.75.109.163:56884.service - OpenSSH per-connection server daemon (147.75.109.163:56884). Jun 20 19:15:18.916957 sshd[4232]: Accepted publickey for core from 147.75.109.163 port 56884 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:18.919079 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:18.925699 systemd-logind[1473]: New session 8 of user core. Jun 20 19:15:18.934844 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:15:19.711095 sshd[4234]: Connection closed by 147.75.109.163 port 56884 Jun 20 19:15:19.712205 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:19.717558 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:15:19.718480 systemd[1]: sshd@8-168.119.177.47:22-147.75.109.163:56884.service: Deactivated successfully. Jun 20 19:15:19.721267 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:15:19.722705 systemd-logind[1473]: Removed session 8. 
Jun 20 19:15:23.879640 update_engine[1477]: I20250620 19:15:23.879019 1477 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 19:15:23.879640 update_engine[1477]: I20250620 19:15:23.879084 1477 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 19:15:23.879640 update_engine[1477]: I20250620 19:15:23.879398 1477 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880005 1477 omaha_request_params.cc:62] Current group set to stable Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880190 1477 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880214 1477 update_attempter.cc:643] Scheduling an action processor start. Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880239 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880285 1477 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880362 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880378 1477 omaha_request_action.cc:272] Request: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: Jun 20 19:15:23.880385 update_engine[1477]: I20250620 19:15:23.880388 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:15:23.881814 locksmithd[1504]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 19:15:23.882979 update_engine[1477]: I20250620 19:15:23.882934 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:15:23.883417 update_engine[1477]: I20250620 19:15:23.883373 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:15:23.886576 update_engine[1477]: E20250620 19:15:23.886480 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:15:23.886690 update_engine[1477]: I20250620 19:15:23.886655 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 19:15:24.898024 systemd[1]: Started sshd@9-168.119.177.47:22-147.75.109.163:56888.service - OpenSSH per-connection server daemon (147.75.109.163:56888). Jun 20 19:15:25.904969 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 56888 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:25.906399 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:25.913702 systemd-logind[1473]: New session 9 of user core. Jun 20 19:15:25.918796 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:15:26.684460 sshd[4249]: Connection closed by 147.75.109.163 port 56888 Jun 20 19:15:26.685338 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:26.690661 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:15:26.691442 systemd[1]: sshd@9-168.119.177.47:22-147.75.109.163:56888.service: Deactivated successfully. Jun 20 19:15:26.694117 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:15:26.697128 systemd-logind[1473]: Removed session 9. Jun 20 19:15:31.864033 systemd[1]: Started sshd@10-168.119.177.47:22-147.75.109.163:53310.service - OpenSSH per-connection server daemon (147.75.109.163:53310). 
Jun 20 19:15:32.856743 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 53310 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:32.858939 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:32.864910 systemd-logind[1473]: New session 10 of user core. Jun 20 19:15:32.870735 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:15:33.624076 sshd[4264]: Connection closed by 147.75.109.163 port 53310 Jun 20 19:15:33.623291 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:33.628939 systemd[1]: sshd@10-168.119.177.47:22-147.75.109.163:53310.service: Deactivated successfully. Jun 20 19:15:33.632355 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:15:33.633507 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:15:33.634841 systemd-logind[1473]: Removed session 10. Jun 20 19:15:33.805226 systemd[1]: Started sshd@11-168.119.177.47:22-147.75.109.163:53318.service - OpenSSH per-connection server daemon (147.75.109.163:53318). Jun 20 19:15:33.875682 update_engine[1477]: I20250620 19:15:33.875416 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:15:33.877434 update_engine[1477]: I20250620 19:15:33.876991 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:15:33.877434 update_engine[1477]: I20250620 19:15:33.877362 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:15:33.878060 update_engine[1477]: E20250620 19:15:33.877775 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:15:33.878060 update_engine[1477]: I20250620 19:15:33.877861 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 19:15:34.799322 sshd[4277]: Accepted publickey for core from 147.75.109.163 port 53318 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:34.802230 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:34.809486 systemd-logind[1473]: New session 11 of user core. Jun 20 19:15:34.821848 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:15:35.699574 sshd[4279]: Connection closed by 147.75.109.163 port 53318 Jun 20 19:15:35.700362 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:35.705767 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:15:35.707119 systemd[1]: sshd@11-168.119.177.47:22-147.75.109.163:53318.service: Deactivated successfully. Jun 20 19:15:35.712835 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:15:35.716631 systemd-logind[1473]: Removed session 11. Jun 20 19:15:35.886924 systemd[1]: Started sshd@12-168.119.177.47:22-147.75.109.163:53320.service - OpenSSH per-connection server daemon (147.75.109.163:53320). Jun 20 19:15:36.892004 sshd[4289]: Accepted publickey for core from 147.75.109.163 port 53320 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:36.894625 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:36.900638 systemd-logind[1473]: New session 12 of user core. Jun 20 19:15:36.906845 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 19:15:37.662649 sshd[4291]: Connection closed by 147.75.109.163 port 53320 Jun 20 19:15:37.663512 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:37.668493 systemd[1]: sshd@12-168.119.177.47:22-147.75.109.163:53320.service: Deactivated successfully. Jun 20 19:15:37.672024 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:15:37.674642 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:15:37.675811 systemd-logind[1473]: Removed session 12. Jun 20 19:15:42.842860 systemd[1]: Started sshd@13-168.119.177.47:22-147.75.109.163:60934.service - OpenSSH per-connection server daemon (147.75.109.163:60934). Jun 20 19:15:43.840968 sshd[4305]: Accepted publickey for core from 147.75.109.163 port 60934 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:43.842989 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:43.849479 systemd-logind[1473]: New session 13 of user core. Jun 20 19:15:43.856810 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:15:43.875694 update_engine[1477]: I20250620 19:15:43.875186 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:15:43.875694 update_engine[1477]: I20250620 19:15:43.875414 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:15:43.875694 update_engine[1477]: I20250620 19:15:43.875650 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:15:43.876284 update_engine[1477]: E20250620 19:15:43.876178 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:15:43.876284 update_engine[1477]: I20250620 19:15:43.876227 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 19:15:44.608727 sshd[4307]: Connection closed by 147.75.109.163 port 60934 Jun 20 19:15:44.609930 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:44.615395 systemd[1]: sshd@13-168.119.177.47:22-147.75.109.163:60934.service: Deactivated successfully. Jun 20 19:15:44.618462 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:15:44.620971 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:15:44.622351 systemd-logind[1473]: Removed session 13. Jun 20 19:15:44.790590 systemd[1]: Started sshd@14-168.119.177.47:22-147.75.109.163:60950.service - OpenSSH per-connection server daemon (147.75.109.163:60950). Jun 20 19:15:45.775592 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 60950 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:45.777763 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:45.783161 systemd-logind[1473]: New session 14 of user core. Jun 20 19:15:45.788752 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:15:46.757340 sshd[4321]: Connection closed by 147.75.109.163 port 60950 Jun 20 19:15:46.758339 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:46.764509 systemd[1]: sshd@14-168.119.177.47:22-147.75.109.163:60950.service: Deactivated successfully. Jun 20 19:15:46.767651 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:15:46.769009 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:15:46.770387 systemd-logind[1473]: Removed session 14. 
Jun 20 19:15:46.935005 systemd[1]: Started sshd@15-168.119.177.47:22-147.75.109.163:54710.service - OpenSSH per-connection server daemon (147.75.109.163:54710). Jun 20 19:15:47.930137 sshd[4331]: Accepted publickey for core from 147.75.109.163 port 54710 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:47.931472 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:47.937405 systemd-logind[1473]: New session 15 of user core. Jun 20 19:15:47.942739 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:15:49.655383 sshd[4336]: Connection closed by 147.75.109.163 port 54710 Jun 20 19:15:49.655940 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:49.661825 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:15:49.662511 systemd[1]: sshd@15-168.119.177.47:22-147.75.109.163:54710.service: Deactivated successfully. Jun 20 19:15:49.667929 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:15:49.669889 systemd-logind[1473]: Removed session 15. Jun 20 19:15:49.840943 systemd[1]: Started sshd@16-168.119.177.47:22-147.75.109.163:54720.service - OpenSSH per-connection server daemon (147.75.109.163:54720). Jun 20 19:15:50.843638 sshd[4353]: Accepted publickey for core from 147.75.109.163 port 54720 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:50.846594 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:50.853612 systemd-logind[1473]: New session 16 of user core. Jun 20 19:15:50.860846 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 20 19:15:51.734727 sshd[4355]: Connection closed by 147.75.109.163 port 54720 Jun 20 19:15:51.735285 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:51.741074 systemd[1]: sshd@16-168.119.177.47:22-147.75.109.163:54720.service: Deactivated successfully. Jun 20 19:15:51.744144 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:15:51.745517 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:15:51.748403 systemd-logind[1473]: Removed session 16. Jun 20 19:15:51.910030 systemd[1]: Started sshd@17-168.119.177.47:22-147.75.109.163:54734.service - OpenSSH per-connection server daemon (147.75.109.163:54734). Jun 20 19:15:52.903948 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 54734 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:52.906040 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:52.911379 systemd-logind[1473]: New session 17 of user core. Jun 20 19:15:52.920843 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:15:53.665177 sshd[4367]: Connection closed by 147.75.109.163 port 54734 Jun 20 19:15:53.666108 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:53.670369 systemd[1]: sshd@17-168.119.177.47:22-147.75.109.163:54734.service: Deactivated successfully. Jun 20 19:15:53.673138 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:15:53.674456 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:15:53.676427 systemd-logind[1473]: Removed session 17. 
Jun 20 19:15:53.875253 update_engine[1477]: I20250620 19:15:53.875118 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:15:53.875959 update_engine[1477]: I20250620 19:15:53.875643 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:15:53.876171 update_engine[1477]: I20250620 19:15:53.876051 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:15:53.876730 update_engine[1477]: E20250620 19:15:53.876647 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:15:53.876852 update_engine[1477]: I20250620 19:15:53.876755 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:15:53.876852 update_engine[1477]: I20250620 19:15:53.876778 1477 omaha_request_action.cc:617] Omaha request response: Jun 20 19:15:53.876950 update_engine[1477]: E20250620 19:15:53.876899 1477 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 19:15:53.876950 update_engine[1477]: I20250620 19:15:53.876931 1477 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 19:15:53.877052 update_engine[1477]: I20250620 19:15:53.876943 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:15:53.877052 update_engine[1477]: I20250620 19:15:53.876955 1477 update_attempter.cc:306] Processing Done. Jun 20 19:15:53.877052 update_engine[1477]: E20250620 19:15:53.876979 1477 update_attempter.cc:619] Update failed. 
Jun 20 19:15:53.877052 update_engine[1477]: I20250620 19:15:53.876991 1477 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 19:15:53.877052 update_engine[1477]: I20250620 19:15:53.877003 1477 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 19:15:53.877052 update_engine[1477]: I20250620 19:15:53.877015 1477 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 19:15:53.877634 update_engine[1477]: I20250620 19:15:53.877331 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:15:53.877634 update_engine[1477]: I20250620 19:15:53.877413 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:15:53.877634 update_engine[1477]: I20250620 19:15:53.877435 1477 omaha_request_action.cc:272] Request: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: Jun 20 19:15:53.877634 update_engine[1477]: I20250620 19:15:53.877449 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:15:53.878097 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 19:15:53.878670 update_engine[1477]: I20250620 19:15:53.877794 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:15:53.878670 update_engine[1477]: I20250620 19:15:53.878137 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:15:53.878670 update_engine[1477]: E20250620 19:15:53.878499 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:15:53.878670 update_engine[1477]: I20250620 19:15:53.878630 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:15:53.878670 update_engine[1477]: I20250620 19:15:53.878654 1477 omaha_request_action.cc:617] Omaha request response: Jun 20 19:15:53.878670 update_engine[1477]: I20250620 19:15:53.878669 1477 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:15:53.878939 update_engine[1477]: I20250620 19:15:53.878681 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:15:53.878939 update_engine[1477]: I20250620 19:15:53.878692 1477 update_attempter.cc:306] Processing Done. Jun 20 19:15:53.878939 update_engine[1477]: I20250620 19:15:53.878705 1477 update_attempter.cc:310] Error event sent. Jun 20 19:15:53.878939 update_engine[1477]: I20250620 19:15:53.878722 1477 update_check_scheduler.cc:74] Next update check in 44m17s Jun 20 19:15:53.879321 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 19:15:58.847975 systemd[1]: Started sshd@18-168.119.177.47:22-147.75.109.163:53092.service - OpenSSH per-connection server daemon (147.75.109.163:53092). Jun 20 19:15:59.842806 sshd[4381]: Accepted publickey for core from 147.75.109.163 port 53092 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:59.845567 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:59.851357 systemd-logind[1473]: New session 18 of user core. Jun 20 19:15:59.857894 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 19:16:00.599127 sshd[4383]: Connection closed by 147.75.109.163 port 53092 Jun 20 19:16:00.599479 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:00.605745 systemd[1]: sshd@18-168.119.177.47:22-147.75.109.163:53092.service: Deactivated successfully. Jun 20 19:16:00.608313 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:16:00.609486 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:16:00.611309 systemd-logind[1473]: Removed session 18. Jun 20 19:16:05.774965 systemd[1]: Started sshd@19-168.119.177.47:22-147.75.109.163:53104.service - OpenSSH per-connection server daemon (147.75.109.163:53104). Jun 20 19:16:06.755470 sshd[4395]: Accepted publickey for core from 147.75.109.163 port 53104 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:06.758616 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:06.765031 systemd-logind[1473]: New session 19 of user core. Jun 20 19:16:06.769823 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:16:07.511149 sshd[4397]: Connection closed by 147.75.109.163 port 53104 Jun 20 19:16:07.512229 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:07.517408 systemd[1]: sshd@19-168.119.177.47:22-147.75.109.163:53104.service: Deactivated successfully. Jun 20 19:16:07.519988 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:16:07.521081 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:16:07.522965 systemd-logind[1473]: Removed session 19. Jun 20 19:16:07.687858 systemd[1]: Started sshd@20-168.119.177.47:22-147.75.109.163:42978.service - OpenSSH per-connection server daemon (147.75.109.163:42978). 
Jun 20 19:16:08.676449 sshd[4409]: Accepted publickey for core from 147.75.109.163 port 42978 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:08.678765 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:08.685888 systemd-logind[1473]: New session 20 of user core. Jun 20 19:16:08.689980 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:16:10.941100 containerd[1493]: time="2025-06-20T19:16:10.941046732Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:16:10.943417 containerd[1493]: time="2025-06-20T19:16:10.943029491Z" level=info msg="StopContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" with timeout 30 (s)" Jun 20 19:16:10.944037 containerd[1493]: time="2025-06-20T19:16:10.943941146Z" level=info msg="Stop container \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" with signal terminated" Jun 20 19:16:10.955537 containerd[1493]: time="2025-06-20T19:16:10.955439995Z" level=info msg="StopContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" with timeout 2 (s)" Jun 20 19:16:10.955944 containerd[1493]: time="2025-06-20T19:16:10.955856060Z" level=info msg="Stop container \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" with signal terminated" Jun 20 19:16:10.957228 systemd[1]: cri-containerd-73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb.scope: Deactivated successfully. 
Jun 20 19:16:10.969519 systemd-networkd[1398]: lxc_health: Link DOWN Jun 20 19:16:10.969526 systemd-networkd[1398]: lxc_health: Lost carrier Jun 20 19:16:10.990624 systemd[1]: cri-containerd-21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554.scope: Deactivated successfully. Jun 20 19:16:10.991278 systemd[1]: cri-containerd-21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554.scope: Consumed 7.868s CPU time, 124.7M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 19:16:11.004503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb-rootfs.mount: Deactivated successfully. Jun 20 19:16:11.014314 containerd[1493]: time="2025-06-20T19:16:11.014171719Z" level=info msg="shim disconnected" id=73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb namespace=k8s.io Jun 20 19:16:11.014314 containerd[1493]: time="2025-06-20T19:16:11.014238963Z" level=warning msg="cleaning up after shim disconnected" id=73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb namespace=k8s.io Jun 20 19:16:11.014314 containerd[1493]: time="2025-06-20T19:16:11.014247963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:11.022284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:11.029019 containerd[1493]: time="2025-06-20T19:16:11.028801517Z" level=info msg="shim disconnected" id=21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554 namespace=k8s.io Jun 20 19:16:11.029019 containerd[1493]: time="2025-06-20T19:16:11.028870241Z" level=warning msg="cleaning up after shim disconnected" id=21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554 namespace=k8s.io Jun 20 19:16:11.029019 containerd[1493]: time="2025-06-20T19:16:11.028878722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:11.037489 containerd[1493]: time="2025-06-20T19:16:11.037439036Z" level=info msg="StopContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" returns successfully" Jun 20 19:16:11.038969 containerd[1493]: time="2025-06-20T19:16:11.038878562Z" level=info msg="StopPodSandbox for \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\"" Jun 20 19:16:11.039254 containerd[1493]: time="2025-06-20T19:16:11.039024451Z" level=info msg="Container to stop \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.042426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230-shm.mount: Deactivated successfully. Jun 20 19:16:11.056042 systemd[1]: cri-containerd-e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230.scope: Deactivated successfully. 
Jun 20 19:16:11.065258 containerd[1493]: time="2025-06-20T19:16:11.065094816Z" level=info msg="StopContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" returns successfully" Jun 20 19:16:11.066950 containerd[1493]: time="2025-06-20T19:16:11.066883683Z" level=info msg="StopPodSandbox for \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\"" Jun 20 19:16:11.067109 containerd[1493]: time="2025-06-20T19:16:11.066965448Z" level=info msg="Container to stop \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.067109 containerd[1493]: time="2025-06-20T19:16:11.066990850Z" level=info msg="Container to stop \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.067109 containerd[1493]: time="2025-06-20T19:16:11.067011571Z" level=info msg="Container to stop \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.067109 containerd[1493]: time="2025-06-20T19:16:11.067036332Z" level=info msg="Container to stop \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.067109 containerd[1493]: time="2025-06-20T19:16:11.067055174Z" level=info msg="Container to stop \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:11.075204 systemd[1]: cri-containerd-96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32.scope: Deactivated successfully. 
Jun 20 19:16:11.111229 containerd[1493]: time="2025-06-20T19:16:11.110805000Z" level=info msg="shim disconnected" id=e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230 namespace=k8s.io Jun 20 19:16:11.111229 containerd[1493]: time="2025-06-20T19:16:11.110907646Z" level=warning msg="cleaning up after shim disconnected" id=e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230 namespace=k8s.io Jun 20 19:16:11.111229 containerd[1493]: time="2025-06-20T19:16:11.110920327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:11.112280 containerd[1493]: time="2025-06-20T19:16:11.112013753Z" level=info msg="shim disconnected" id=96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32 namespace=k8s.io Jun 20 19:16:11.112280 containerd[1493]: time="2025-06-20T19:16:11.112085517Z" level=warning msg="cleaning up after shim disconnected" id=96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32 namespace=k8s.io Jun 20 19:16:11.112280 containerd[1493]: time="2025-06-20T19:16:11.112101238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:11.130881 containerd[1493]: time="2025-06-20T19:16:11.130693794Z" level=info msg="TearDown network for sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" successfully" Jun 20 19:16:11.130881 containerd[1493]: time="2025-06-20T19:16:11.130736877Z" level=info msg="StopPodSandbox for \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" returns successfully" Jun 20 19:16:11.131907 containerd[1493]: time="2025-06-20T19:16:11.131525444Z" level=info msg="TearDown network for sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" successfully" Jun 20 19:16:11.131907 containerd[1493]: time="2025-06-20T19:16:11.131810581Z" level=info msg="StopPodSandbox for \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" returns successfully" Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.254905 2792 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65062899-4aaa-4263-b605-324a1df5c558-cilium-config-path\") pod \"65062899-4aaa-4263-b605-324a1df5c558\" (UID: \"65062899-4aaa-4263-b605-324a1df5c558\") " Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.254975 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-hostproc\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.255015 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnd2d\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-kube-api-access-lnd2d\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.255051 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/977a182c-e23a-49dd-a385-900aa02f271f-cilium-config-path\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.255080 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-hubble-tls\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.256662 kubelet[2792]: I0620 19:16:11.255106 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-run\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: 
\"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255133 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-cgroup\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255161 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-kernel\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255238 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/977a182c-e23a-49dd-a385-900aa02f271f-clustermesh-secrets\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255267 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-bpf-maps\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255295 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-xtables-lock\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257163 kubelet[2792]: I0620 19:16:11.255323 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cni-path\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257312 kubelet[2792]: I0620 19:16:11.255352 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-lib-modules\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257312 kubelet[2792]: I0620 19:16:11.255403 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-net\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257312 kubelet[2792]: I0620 19:16:11.255441 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-etc-cni-netd\") pod \"977a182c-e23a-49dd-a385-900aa02f271f\" (UID: \"977a182c-e23a-49dd-a385-900aa02f271f\") " Jun 20 19:16:11.257312 kubelet[2792]: I0620 19:16:11.255474 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmkf\" (UniqueName: \"kubernetes.io/projected/65062899-4aaa-4263-b605-324a1df5c558-kube-api-access-mxmkf\") pod \"65062899-4aaa-4263-b605-324a1df5c558\" (UID: \"65062899-4aaa-4263-b605-324a1df5c558\") " Jun 20 19:16:11.257312 kubelet[2792]: I0620 19:16:11.256692 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-hostproc" (OuterVolumeSpecName: "hostproc") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.263720 kubelet[2792]: I0620 19:16:11.263170 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65062899-4aaa-4263-b605-324a1df5c558-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65062899-4aaa-4263-b605-324a1df5c558" (UID: "65062899-4aaa-4263-b605-324a1df5c558"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:16:11.264798 kubelet[2792]: I0620 19:16:11.264719 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.264798 kubelet[2792]: I0620 19:16:11.264768 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.265142 kubelet[2792]: I0620 19:16:11.264943 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.265142 kubelet[2792]: I0620 19:16:11.264979 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.265697 kubelet[2792]: I0620 19:16:11.265650 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.266341 kubelet[2792]: I0620 19:16:11.266018 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.266341 kubelet[2792]: I0620 19:16:11.266205 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-kube-api-access-lnd2d" (OuterVolumeSpecName: "kube-api-access-lnd2d") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "kube-api-access-lnd2d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:11.266341 kubelet[2792]: I0620 19:16:11.266238 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.266341 kubelet[2792]: I0620 19:16:11.266250 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.266341 kubelet[2792]: I0620 19:16:11.266263 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cni-path" (OuterVolumeSpecName: "cni-path") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:11.266912 kubelet[2792]: I0620 19:16:11.266876 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977a182c-e23a-49dd-a385-900aa02f271f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:16:11.268977 kubelet[2792]: I0620 19:16:11.268930 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:11.269062 kubelet[2792]: I0620 19:16:11.269029 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65062899-4aaa-4263-b605-324a1df5c558-kube-api-access-mxmkf" (OuterVolumeSpecName: "kube-api-access-mxmkf") pod "65062899-4aaa-4263-b605-324a1df5c558" (UID: "65062899-4aaa-4263-b605-324a1df5c558"). InnerVolumeSpecName "kube-api-access-mxmkf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:11.269483 kubelet[2792]: I0620 19:16:11.269427 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977a182c-e23a-49dd-a385-900aa02f271f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "977a182c-e23a-49dd-a385-900aa02f271f" (UID: "977a182c-e23a-49dd-a385-900aa02f271f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.355965 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/977a182c-e23a-49dd-a385-900aa02f271f-cilium-config-path\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356040 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-hubble-tls\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356069 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-run\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356093 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cilium-cgroup\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356112 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-kernel\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356133 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/977a182c-e23a-49dd-a385-900aa02f271f-clustermesh-secrets\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356152 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-bpf-maps\") 
on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356436 kubelet[2792]: I0620 19:16:11.356174 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-xtables-lock\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356196 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-cni-path\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356215 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-lib-modules\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356233 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-host-proc-sys-net\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356260 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-etc-cni-netd\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356280 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxmkf\" (UniqueName: \"kubernetes.io/projected/65062899-4aaa-4263-b605-324a1df5c558-kube-api-access-mxmkf\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356305 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/65062899-4aaa-4263-b605-324a1df5c558-cilium-config-path\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356324 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/977a182c-e23a-49dd-a385-900aa02f271f-hostproc\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.356876 kubelet[2792]: I0620 19:16:11.356345 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnd2d\" (UniqueName: \"kubernetes.io/projected/977a182c-e23a-49dd-a385-900aa02f271f-kube-api-access-lnd2d\") on node \"ci-4230-2-0-2-fda0fd8fee\" DevicePath \"\"" Jun 20 19:16:11.439596 kubelet[2792]: I0620 19:16:11.437977 2792 scope.go:117] "RemoveContainer" containerID="73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb" Jun 20 19:16:11.443335 containerd[1493]: time="2025-06-20T19:16:11.442912178Z" level=info msg="RemoveContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\"" Jun 20 19:16:11.446967 systemd[1]: Removed slice kubepods-besteffort-pod65062899_4aaa_4263_b605_324a1df5c558.slice - libcontainer container kubepods-besteffort-pod65062899_4aaa_4263_b605_324a1df5c558.slice. 
Jun 20 19:16:11.450392 containerd[1493]: time="2025-06-20T19:16:11.449941520Z" level=info msg="RemoveContainer for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" returns successfully" Jun 20 19:16:11.450510 kubelet[2792]: I0620 19:16:11.450269 2792 scope.go:117] "RemoveContainer" containerID="73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb" Jun 20 19:16:11.452060 containerd[1493]: time="2025-06-20T19:16:11.450810212Z" level=error msg="ContainerStatus for \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\": not found" Jun 20 19:16:11.452438 kubelet[2792]: E0620 19:16:11.452213 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\": not found" containerID="73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb" Jun 20 19:16:11.452438 kubelet[2792]: I0620 19:16:11.452255 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb"} err="failed to get container status \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"73ee50940bc25d14c86aa7fb9ff91b14ab3547e9f2cdc58aab8588ddbefee7cb\": not found" Jun 20 19:16:11.452438 kubelet[2792]: I0620 19:16:11.452312 2792 scope.go:117] "RemoveContainer" containerID="21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554" Jun 20 19:16:11.456889 containerd[1493]: time="2025-06-20T19:16:11.456854415Z" level=info msg="RemoveContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\"" Jun 20 19:16:11.457888 
systemd[1]: Removed slice kubepods-burstable-pod977a182c_e23a_49dd_a385_900aa02f271f.slice - libcontainer container kubepods-burstable-pod977a182c_e23a_49dd_a385_900aa02f271f.slice. Jun 20 19:16:11.458174 systemd[1]: kubepods-burstable-pod977a182c_e23a_49dd_a385_900aa02f271f.slice: Consumed 7.966s CPU time, 125.1M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 19:16:11.462694 containerd[1493]: time="2025-06-20T19:16:11.462637922Z" level=info msg="RemoveContainer for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" returns successfully" Jun 20 19:16:11.463502 kubelet[2792]: I0620 19:16:11.463455 2792 scope.go:117] "RemoveContainer" containerID="a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf" Jun 20 19:16:11.465157 containerd[1493]: time="2025-06-20T19:16:11.465088349Z" level=info msg="RemoveContainer for \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\"" Jun 20 19:16:11.471646 containerd[1493]: time="2025-06-20T19:16:11.471441530Z" level=info msg="RemoveContainer for \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\" returns successfully" Jun 20 19:16:11.472239 kubelet[2792]: I0620 19:16:11.472036 2792 scope.go:117] "RemoveContainer" containerID="51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea" Jun 20 19:16:11.474428 containerd[1493]: time="2025-06-20T19:16:11.474314343Z" level=info msg="RemoveContainer for \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\"" Jun 20 19:16:11.479716 containerd[1493]: time="2025-06-20T19:16:11.478451871Z" level=info msg="RemoveContainer for \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\" returns successfully" Jun 20 19:16:11.479809 kubelet[2792]: I0620 19:16:11.478800 2792 scope.go:117] "RemoveContainer" containerID="e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce" Jun 20 19:16:11.482355 containerd[1493]: time="2025-06-20T19:16:11.481429890Z" level=info msg="RemoveContainer 
for \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\"" Jun 20 19:16:11.486115 containerd[1493]: time="2025-06-20T19:16:11.486079769Z" level=info msg="RemoveContainer for \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\" returns successfully" Jun 20 19:16:11.486485 kubelet[2792]: I0620 19:16:11.486460 2792 scope.go:117] "RemoveContainer" containerID="f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9" Jun 20 19:16:11.488204 containerd[1493]: time="2025-06-20T19:16:11.488163534Z" level=info msg="RemoveContainer for \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\"" Jun 20 19:16:11.493655 containerd[1493]: time="2025-06-20T19:16:11.493292522Z" level=info msg="RemoveContainer for \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\" returns successfully" Jun 20 19:16:11.494417 kubelet[2792]: I0620 19:16:11.493512 2792 scope.go:117] "RemoveContainer" containerID="21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554" Jun 20 19:16:11.494791 containerd[1493]: time="2025-06-20T19:16:11.494689246Z" level=error msg="ContainerStatus for \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\": not found" Jun 20 19:16:11.495160 kubelet[2792]: E0620 19:16:11.495095 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\": not found" containerID="21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554" Jun 20 19:16:11.495160 kubelet[2792]: I0620 19:16:11.495147 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554"} err="failed to 
get container status \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\": rpc error: code = NotFound desc = an error occurred when try to find container \"21a120930a932a5ed1595c73a7e9782b3902329b3bfd21ee70d3ab7909142554\": not found" Jun 20 19:16:11.495160 kubelet[2792]: I0620 19:16:11.495168 2792 scope.go:117] "RemoveContainer" containerID="a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf" Jun 20 19:16:11.496178 containerd[1493]: time="2025-06-20T19:16:11.495812154Z" level=error msg="ContainerStatus for \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\": not found" Jun 20 19:16:11.496313 kubelet[2792]: E0620 19:16:11.496000 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\": not found" containerID="a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf" Jun 20 19:16:11.496313 kubelet[2792]: I0620 19:16:11.496025 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf"} err="failed to get container status \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"a86e9d2267f6f0788835f03954e6a1a15e98c785dbc3597fbd0b57949253cdbf\": not found" Jun 20 19:16:11.496313 kubelet[2792]: I0620 19:16:11.496043 2792 scope.go:117] "RemoveContainer" containerID="51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea" Jun 20 19:16:11.497507 containerd[1493]: time="2025-06-20T19:16:11.497241559Z" level=error msg="ContainerStatus for 
\"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\": not found"
Jun 20 19:16:11.498099 kubelet[2792]: E0620 19:16:11.497676 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\": not found" containerID="51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea"
Jun 20 19:16:11.498099 kubelet[2792]: I0620 19:16:11.497716 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea"} err="failed to get container status \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"51acad2b7f29cf83b457479acbaf48ed6848e3831c9af522c7b08fc1e22293ea\": not found"
Jun 20 19:16:11.498099 kubelet[2792]: I0620 19:16:11.497744 2792 scope.go:117] "RemoveContainer" containerID="e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce"
Jun 20 19:16:11.498920 containerd[1493]: time="2025-06-20T19:16:11.498521156Z" level=error msg="ContainerStatus for \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\": not found"
Jun 20 19:16:11.499081 kubelet[2792]: E0620 19:16:11.498816 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\": not found" containerID="e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce"
Jun 20 19:16:11.499462 kubelet[2792]: I0620 19:16:11.498870 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce"} err="failed to get container status \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"e947d38b7a2dd0f1c07bfeb8dff17791d75730696ce9668de82b2a580de3b6ce\": not found"
Jun 20 19:16:11.499462 kubelet[2792]: I0620 19:16:11.499226 2792 scope.go:117] "RemoveContainer" containerID="f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9"
Jun 20 19:16:11.500022 containerd[1493]: time="2025-06-20T19:16:11.499916440Z" level=error msg="ContainerStatus for \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\": not found"
Jun 20 19:16:11.500368 kubelet[2792]: E0620 19:16:11.500088 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\": not found" containerID="f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9"
Jun 20 19:16:11.500368 kubelet[2792]: I0620 19:16:11.500119 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9"} err="failed to get container status \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5df9cbf8e5d30d0d967a5b2321eb5dfe5bbd71fdf3c0e6739dd9c4c2ba472c9\": not found"
Jun 20 19:16:11.916761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230-rootfs.mount: Deactivated successfully.
Jun 20 19:16:11.916946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32-rootfs.mount: Deactivated successfully.
Jun 20 19:16:11.917053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32-shm.mount: Deactivated successfully.
Jun 20 19:16:11.917171 systemd[1]: var-lib-kubelet-pods-65062899\x2d4aaa\x2d4263\x2db605\x2d324a1df5c558-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxmkf.mount: Deactivated successfully.
Jun 20 19:16:11.917277 systemd[1]: var-lib-kubelet-pods-977a182c\x2de23a\x2d49dd\x2da385\x2d900aa02f271f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlnd2d.mount: Deactivated successfully.
Jun 20 19:16:11.917428 systemd[1]: var-lib-kubelet-pods-977a182c\x2de23a\x2d49dd\x2da385\x2d900aa02f271f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 19:16:11.917568 systemd[1]: var-lib-kubelet-pods-977a182c\x2de23a\x2d49dd\x2da385\x2d900aa02f271f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 19:16:12.515379 kubelet[2792]: I0620 19:16:12.515310 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65062899-4aaa-4263-b605-324a1df5c558" path="/var/lib/kubelet/pods/65062899-4aaa-4263-b605-324a1df5c558/volumes"
Jun 20 19:16:12.516211 kubelet[2792]: I0620 19:16:12.516159 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977a182c-e23a-49dd-a385-900aa02f271f" path="/var/lib/kubelet/pods/977a182c-e23a-49dd-a385-900aa02f271f/volumes"
Jun 20 19:16:13.003788 sshd[4411]: Connection closed by 147.75.109.163 port 42978
Jun 20 19:16:13.005878 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:13.012503 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:16:13.013920 systemd[1]: sshd@20-168.119.177.47:22-147.75.109.163:42978.service: Deactivated successfully.
Jun 20 19:16:13.016100 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:16:13.016321 systemd[1]: session-20.scope: Consumed 1.067s CPU time, 23.6M memory peak.
Jun 20 19:16:13.017260 systemd-logind[1473]: Removed session 20.
Jun 20 19:16:13.193010 systemd[1]: Started sshd@21-168.119.177.47:22-147.75.109.163:42980.service - OpenSSH per-connection server daemon (147.75.109.163:42980).
Jun 20 19:16:13.704378 kubelet[2792]: E0620 19:16:13.704253 2792 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:16:14.186884 sshd[4573]: Accepted publickey for core from 147.75.109.163 port 42980 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:16:14.188753 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:14.193746 systemd-logind[1473]: New session 21 of user core.
Jun 20 19:16:14.201100 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:16:14.715625 kubelet[2792]: I0620 19:16:14.714700 2792 setters.go:618] "Node became not ready" node="ci-4230-2-0-2-fda0fd8fee" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:16:14Z","lastTransitionTime":"2025-06-20T19:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:16:16.059510 systemd[1]: Created slice kubepods-burstable-podb70d573d_fdda_4bad_9909_0a2587d2ed2f.slice - libcontainer container kubepods-burstable-podb70d573d_fdda_4bad_9909_0a2587d2ed2f.slice.
Jun 20 19:16:16.085356 kubelet[2792]: I0620 19:16:16.085310 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-cilium-run\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.085356 kubelet[2792]: I0620 19:16:16.085363 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-hostproc\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.085919 kubelet[2792]: I0620 19:16:16.085382 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-xtables-lock\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.085919 kubelet[2792]: I0620 19:16:16.085398 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-cilium-cgroup\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.085414 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-lib-modules\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.086664 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b70d573d-fdda-4bad-9909-0a2587d2ed2f-hubble-tls\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.086696 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b70d573d-fdda-4bad-9909-0a2587d2ed2f-cilium-ipsec-secrets\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.086712 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-cni-path\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.086725 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-etc-cni-netd\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.086902 kubelet[2792]: I0620 19:16:16.086739 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b70d573d-fdda-4bad-9909-0a2587d2ed2f-clustermesh-secrets\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.087137 kubelet[2792]: I0620 19:16:16.086760 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-host-proc-sys-net\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.087137 kubelet[2792]: I0620 19:16:16.086781 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxswk\" (UniqueName: \"kubernetes.io/projected/b70d573d-fdda-4bad-9909-0a2587d2ed2f-kube-api-access-wxswk\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.087137 kubelet[2792]: I0620 19:16:16.086797 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-bpf-maps\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.087137 kubelet[2792]: I0620 19:16:16.086812 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b70d573d-fdda-4bad-9909-0a2587d2ed2f-cilium-config-path\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.087137 kubelet[2792]: I0620 19:16:16.086827 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b70d573d-fdda-4bad-9909-0a2587d2ed2f-host-proc-sys-kernel\") pod \"cilium-knr8z\" (UID: \"b70d573d-fdda-4bad-9909-0a2587d2ed2f\") " pod="kube-system/cilium-knr8z"
Jun 20 19:16:16.192376 sshd[4575]: Connection closed by 147.75.109.163 port 42980
Jun 20 19:16:16.193154 sshd-session[4573]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:16.207812 systemd[1]: sshd@21-168.119.177.47:22-147.75.109.163:42980.service: Deactivated successfully.
Jun 20 19:16:16.223045 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:16:16.223974 systemd[1]: session-21.scope: Consumed 1.179s CPU time, 25.3M memory peak.
Jun 20 19:16:16.227838 systemd-logind[1473]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:16:16.230997 systemd-logind[1473]: Removed session 21.
Jun 20 19:16:16.367282 containerd[1493]: time="2025-06-20T19:16:16.366808141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-knr8z,Uid:b70d573d-fdda-4bad-9909-0a2587d2ed2f,Namespace:kube-system,Attempt:0,}"
Jun 20 19:16:16.371103 systemd[1]: Started sshd@22-168.119.177.47:22-147.75.109.163:43068.service - OpenSSH per-connection server daemon (147.75.109.163:43068).
Jun 20 19:16:16.402000 containerd[1493]: time="2025-06-20T19:16:16.401715766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:16:16.402000 containerd[1493]: time="2025-06-20T19:16:16.401790611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:16:16.402000 containerd[1493]: time="2025-06-20T19:16:16.401808292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:16:16.402000 containerd[1493]: time="2025-06-20T19:16:16.401894457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:16:16.428900 systemd[1]: Started cri-containerd-37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d.scope - libcontainer container 37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d.
Jun 20 19:16:16.460187 containerd[1493]: time="2025-06-20T19:16:16.459767827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-knr8z,Uid:b70d573d-fdda-4bad-9909-0a2587d2ed2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\""
Jun 20 19:16:16.470376 containerd[1493]: time="2025-06-20T19:16:16.470110371Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:16:16.485714 containerd[1493]: time="2025-06-20T19:16:16.485659909Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253\""
Jun 20 19:16:16.488431 containerd[1493]: time="2025-06-20T19:16:16.488370512Z" level=info msg="StartContainer for \"64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253\""
Jun 20 19:16:16.523788 systemd[1]: Started cri-containerd-64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253.scope - libcontainer container 64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253.
Jun 20 19:16:16.565768 containerd[1493]: time="2025-06-20T19:16:16.565583368Z" level=info msg="StartContainer for \"64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253\" returns successfully"
Jun 20 19:16:16.589899 systemd[1]: cri-containerd-64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253.scope: Deactivated successfully.
Jun 20 19:16:16.630523 containerd[1493]: time="2025-06-20T19:16:16.630087618Z" level=info msg="shim disconnected" id=64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253 namespace=k8s.io
Jun 20 19:16:16.630523 containerd[1493]: time="2025-06-20T19:16:16.630225226Z" level=warning msg="cleaning up after shim disconnected" id=64b291abe72ea5ced41bf25315858c31440065fbb39a71e6f62cd6506f980253 namespace=k8s.io
Jun 20 19:16:16.630523 containerd[1493]: time="2025-06-20T19:16:16.630238627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:16:17.364493 sshd[4589]: Accepted publickey for core from 147.75.109.163 port 43068 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:16:17.366635 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:17.373248 systemd-logind[1473]: New session 22 of user core.
Jun 20 19:16:17.379821 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:16:17.479687 containerd[1493]: time="2025-06-20T19:16:17.479347335Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:16:17.496623 containerd[1493]: time="2025-06-20T19:16:17.496300038Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b\""
Jun 20 19:16:17.499725 containerd[1493]: time="2025-06-20T19:16:17.499640479Z" level=info msg="StartContainer for \"5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b\""
Jun 20 19:16:17.530770 systemd[1]: Started cri-containerd-5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b.scope - libcontainer container 5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b.
Jun 20 19:16:17.564961 containerd[1493]: time="2025-06-20T19:16:17.563934200Z" level=info msg="StartContainer for \"5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b\" returns successfully"
Jun 20 19:16:17.574895 systemd[1]: cri-containerd-5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b.scope: Deactivated successfully.
Jun 20 19:16:17.615916 containerd[1493]: time="2025-06-20T19:16:17.615448349Z" level=info msg="shim disconnected" id=5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b namespace=k8s.io
Jun 20 19:16:17.615916 containerd[1493]: time="2025-06-20T19:16:17.615506152Z" level=warning msg="cleaning up after shim disconnected" id=5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b namespace=k8s.io
Jun 20 19:16:17.615916 containerd[1493]: time="2025-06-20T19:16:17.615515473Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:16:18.040971 sshd[4694]: Connection closed by 147.75.109.163 port 43068
Jun 20 19:16:18.040684 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:18.047178 systemd[1]: sshd@22-168.119.177.47:22-147.75.109.163:43068.service: Deactivated successfully.
Jun 20 19:16:18.050295 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:16:18.051691 systemd-logind[1473]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:16:18.053308 systemd-logind[1473]: Removed session 22.
Jun 20 19:16:18.217223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ff2b6a83760a7f91d0c2fc9214e36e363482cc4f4dfeefa21276f08c8760c2b-rootfs.mount: Deactivated successfully.
Jun 20 19:16:18.225969 systemd[1]: Started sshd@23-168.119.177.47:22-147.75.109.163:43078.service - OpenSSH per-connection server daemon (147.75.109.163:43078).
Jun 20 19:16:18.482508 containerd[1493]: time="2025-06-20T19:16:18.481930388Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:16:18.499949 containerd[1493]: time="2025-06-20T19:16:18.499869912Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db\""
Jun 20 19:16:18.502051 containerd[1493]: time="2025-06-20T19:16:18.501915355Z" level=info msg="StartContainer for \"4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db\""
Jun 20 19:16:18.543780 systemd[1]: Started cri-containerd-4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db.scope - libcontainer container 4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db.
Jun 20 19:16:18.585400 containerd[1493]: time="2025-06-20T19:16:18.585094380Z" level=info msg="StartContainer for \"4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db\" returns successfully"
Jun 20 19:16:18.591049 systemd[1]: cri-containerd-4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db.scope: Deactivated successfully.
Jun 20 19:16:18.632663 containerd[1493]: time="2025-06-20T19:16:18.632434039Z" level=info msg="shim disconnected" id=4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db namespace=k8s.io
Jun 20 19:16:18.632663 containerd[1493]: time="2025-06-20T19:16:18.632536005Z" level=warning msg="cleaning up after shim disconnected" id=4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db namespace=k8s.io
Jun 20 19:16:18.632663 containerd[1493]: time="2025-06-20T19:16:18.632588488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:16:18.705915 kubelet[2792]: E0620 19:16:18.705820 2792 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:16:19.211061 sshd[4763]: Accepted publickey for core from 147.75.109.163 port 43078 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw
Jun 20 19:16:19.213070 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:19.215384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f7de1d6f86d6531afa26e05d15e25649a5efc0d6ea8571f3effe86ed6c5b5db-rootfs.mount: Deactivated successfully.
Jun 20 19:16:19.222322 systemd-logind[1473]: New session 23 of user core.
Jun 20 19:16:19.226782 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:16:19.492620 containerd[1493]: time="2025-06-20T19:16:19.491310503Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:16:19.519804 containerd[1493]: time="2025-06-20T19:16:19.518913172Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d\""
Jun 20 19:16:19.520005 containerd[1493]: time="2025-06-20T19:16:19.519861109Z" level=info msg="StartContainer for \"04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d\""
Jun 20 19:16:19.557182 systemd[1]: Started cri-containerd-04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d.scope - libcontainer container 04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d.
Jun 20 19:16:19.588950 systemd[1]: cri-containerd-04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d.scope: Deactivated successfully.
Jun 20 19:16:19.595637 containerd[1493]: time="2025-06-20T19:16:19.595433477Z" level=info msg="StartContainer for \"04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d\" returns successfully"
Jun 20 19:16:19.623631 containerd[1493]: time="2025-06-20T19:16:19.623519855Z" level=info msg="shim disconnected" id=04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d namespace=k8s.io
Jun 20 19:16:19.623631 containerd[1493]: time="2025-06-20T19:16:19.623618701Z" level=warning msg="cleaning up after shim disconnected" id=04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d namespace=k8s.io
Jun 20 19:16:19.623892 containerd[1493]: time="2025-06-20T19:16:19.623642703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:16:20.216651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04d9a191dde49158a46b0759e4691146b8b516f9bf1de0f4c473f89484e9eb8d-rootfs.mount: Deactivated successfully.
Jun 20 19:16:20.495948 containerd[1493]: time="2025-06-20T19:16:20.495847695Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:16:20.528269 containerd[1493]: time="2025-06-20T19:16:20.527752665Z" level=info msg="CreateContainer within sandbox \"37e8d30811ad7cf70321d717907dc0262666a441858d2ca4550e816fe852c07d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34033798c10437263ed243a2058bd2aeea0f9c66271d4b710bc92b7e82f8e95c\""
Jun 20 19:16:20.528782 containerd[1493]: time="2025-06-20T19:16:20.528753446Z" level=info msg="StartContainer for \"34033798c10437263ed243a2058bd2aeea0f9c66271d4b710bc92b7e82f8e95c\""
Jun 20 19:16:20.560684 systemd[1]: Started cri-containerd-34033798c10437263ed243a2058bd2aeea0f9c66271d4b710bc92b7e82f8e95c.scope - libcontainer container 34033798c10437263ed243a2058bd2aeea0f9c66271d4b710bc92b7e82f8e95c.
Jun 20 19:16:20.597304 containerd[1493]: time="2025-06-20T19:16:20.597156944Z" level=info msg="StartContainer for \"34033798c10437263ed243a2058bd2aeea0f9c66271d4b710bc92b7e82f8e95c\" returns successfully"
Jun 20 19:16:20.935591 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jun 20 19:16:21.519689 kubelet[2792]: I0620 19:16:21.519364 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-knr8z" podStartSLOduration=5.519346004 podStartE2EDuration="5.519346004s" podCreationTimestamp="2025-06-20 19:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:21.519053546 +0000 UTC m=+343.115006773" watchObservedRunningTime="2025-06-20 19:16:21.519346004 +0000 UTC m=+343.115299271"
Jun 20 19:16:23.930725 systemd-networkd[1398]: lxc_health: Link UP
Jun 20 19:16:23.936686 systemd-networkd[1398]: lxc_health: Gained carrier
Jun 20 19:16:25.546757 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Jun 20 19:16:28.520447 kubelet[2792]: E0620 19:16:28.520113 2792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55272->127.0.0.1:45797: write tcp 127.0.0.1:55272->127.0.0.1:45797: write: connection reset by peer
Jun 20 19:16:30.831307 sshd[4821]: Connection closed by 147.75.109.163 port 43078
Jun 20 19:16:30.832389 sshd-session[4763]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:30.839906 systemd[1]: sshd@23-168.119.177.47:22-147.75.109.163:43078.service: Deactivated successfully.
Jun 20 19:16:30.842678 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:16:30.846832 systemd-logind[1473]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:16:30.848753 systemd-logind[1473]: Removed session 23.
Jun 20 19:16:38.543165 containerd[1493]: time="2025-06-20T19:16:38.543097262Z" level=info msg="StopPodSandbox for \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\""
Jun 20 19:16:38.543806 containerd[1493]: time="2025-06-20T19:16:38.543213909Z" level=info msg="TearDown network for sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" successfully"
Jun 20 19:16:38.543806 containerd[1493]: time="2025-06-20T19:16:38.543226910Z" level=info msg="StopPodSandbox for \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" returns successfully"
Jun 20 19:16:38.544076 containerd[1493]: time="2025-06-20T19:16:38.544013158Z" level=info msg="RemovePodSandbox for \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\""
Jun 20 19:16:38.544076 containerd[1493]: time="2025-06-20T19:16:38.544053361Z" level=info msg="Forcibly stopping sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\""
Jun 20 19:16:38.544184 containerd[1493]: time="2025-06-20T19:16:38.544115404Z" level=info msg="TearDown network for sandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" successfully"
Jun 20 19:16:38.548803 containerd[1493]: time="2025-06-20T19:16:38.548602999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 19:16:38.548803 containerd[1493]: time="2025-06-20T19:16:38.548675044Z" level=info msg="RemovePodSandbox \"96543cc70d55876dd855aaf1fe838d6cca00e587d9bd5a1c2aa47c50d9468f32\" returns successfully"
Jun 20 19:16:38.549479 containerd[1493]: time="2025-06-20T19:16:38.549153273Z" level=info msg="StopPodSandbox for \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\""
Jun 20 19:16:38.549479 containerd[1493]: time="2025-06-20T19:16:38.549242798Z" level=info msg="TearDown network for sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" successfully"
Jun 20 19:16:38.549479 containerd[1493]: time="2025-06-20T19:16:38.549255559Z" level=info msg="StopPodSandbox for \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" returns successfully"
Jun 20 19:16:38.549655 containerd[1493]: time="2025-06-20T19:16:38.549592540Z" level=info msg="RemovePodSandbox for \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\""
Jun 20 19:16:38.549655 containerd[1493]: time="2025-06-20T19:16:38.549619342Z" level=info msg="Forcibly stopping sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\""
Jun 20 19:16:38.549738 containerd[1493]: time="2025-06-20T19:16:38.549669785Z" level=info msg="TearDown network for sandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" successfully"
Jun 20 19:16:38.553378 containerd[1493]: time="2025-06-20T19:16:38.553321528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 19:16:38.553492 containerd[1493]: time="2025-06-20T19:16:38.553385812Z" level=info msg="RemovePodSandbox \"e92226c022081a441c4017c8e25042c684d42d1ee23daf11abdbe4eacf859230\" returns successfully"
Jun 20 19:16:45.750859 kubelet[2792]: E0620 19:16:45.750352 2792 controller.go:195] "Failed to update lease" err="Put \"https://168.119.177.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-2-fda0fd8fee?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jun 20 19:16:46.102012 systemd[1]: cri-containerd-86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a.scope: Deactivated successfully.
Jun 20 19:16:46.103363 systemd[1]: cri-containerd-86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a.scope: Consumed 6.834s CPU time, 60.2M memory peak.
Jun 20 19:16:46.129315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a-rootfs.mount: Deactivated successfully.
Jun 20 19:16:46.138878 containerd[1493]: time="2025-06-20T19:16:46.138528697Z" level=info msg="shim disconnected" id=86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a namespace=k8s.io
Jun 20 19:16:46.138878 containerd[1493]: time="2025-06-20T19:16:46.138666249Z" level=warning msg="cleaning up after shim disconnected" id=86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a namespace=k8s.io
Jun 20 19:16:46.138878 containerd[1493]: time="2025-06-20T19:16:46.138679688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:16:46.224978 kubelet[2792]: E0620 19:16:46.224116 2792 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47008->10.0.0.2:2379: read: connection timed out"
Jun 20 19:16:46.570493 kubelet[2792]: I0620 19:16:46.570327 2792 scope.go:117] "RemoveContainer" containerID="86b248971a87831acc941274d1d1bfd08634895c475b1d13fab64743b81bf09a"
Jun 20 19:16:46.573960 containerd[1493]: time="2025-06-20T19:16:46.573900999Z" level=info msg="CreateContainer within sandbox \"6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 20 19:16:46.591065 containerd[1493]: time="2025-06-20T19:16:46.590943157Z" level=info msg="CreateContainer within sandbox \"6a04995d979a20ddfe5ee76f1ccad427230f5bd9e46267d4e334c5a8e753364f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f0db73a1678f4f1fc7d45c12849cd473b8949d362758c5c54527974925ada90\""
Jun 20 19:16:46.591583 containerd[1493]: time="2025-06-20T19:16:46.591397809Z" level=info msg="StartContainer for \"2f0db73a1678f4f1fc7d45c12849cd473b8949d362758c5c54527974925ada90\""
Jun 20 19:16:46.631759 systemd[1]: Started cri-containerd-2f0db73a1678f4f1fc7d45c12849cd473b8949d362758c5c54527974925ada90.scope - libcontainer container 2f0db73a1678f4f1fc7d45c12849cd473b8949d362758c5c54527974925ada90.
Jun 20 19:16:46.675097 containerd[1493]: time="2025-06-20T19:16:46.675048415Z" level=info msg="StartContainer for \"2f0db73a1678f4f1fc7d45c12849cd473b8949d362758c5c54527974925ada90\" returns successfully"
Jun 20 19:16:51.022700 kubelet[2792]: E0620 19:16:51.021333 2792 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46810->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-0-2-fda0fd8fee.184ad645d3004671 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-0-2-fda0fd8fee,UID:e561535f9f940b17ad94415027903bc7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-2-fda0fd8fee,},FirstTimestamp:2025-06-20 19:16:40.567957105 +0000 UTC m=+362.163910412,LastTimestamp:2025-06-20 19:16:40.567957105 +0000 UTC m=+362.163910412,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-2-fda0fd8fee,}"
Jun 20 19:16:51.693956 systemd[1]: cri-containerd-401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634.scope: Deactivated successfully.
Jun 20 19:16:51.694940 systemd[1]: cri-containerd-401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634.scope: Consumed 5.078s CPU time, 22.3M memory peak.
Jun 20 19:16:51.724146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634-rootfs.mount: Deactivated successfully.
Jun 20 19:16:51.731730 containerd[1493]: time="2025-06-20T19:16:51.731615384Z" level=info msg="shim disconnected" id=401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634 namespace=k8s.io
Jun 20 19:16:51.731730 containerd[1493]: time="2025-06-20T19:16:51.731716978Z" level=warning msg="cleaning up after shim disconnected" id=401ca4401dc4e18ae8f5f9170e7daf13164a89cd09f05b54ac12072d5534d634 namespace=k8s.io
Jun 20 19:16:51.732250 containerd[1493]: time="2025-06-20T19:16:51.731746937Z" level=info msg="cleaning up dead shim" namespace=k8s.io